Dataset schema (column types and observed minimum/maximum values):

| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-29 00:46:34 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (502 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-29 00:44:25 |
| card | string (length) | 11 | 1.01M |
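Each row below follows this schema. As a minimal sketch of working with such a dump in pandas (the Parquet file name is hypothetical; the actual export location is not given here):

```python
import pandas as pd

# Hypothetical file name; the dump's real export location is not given above.
df = pd.read_parquet("model_cards.parquet")

# Columns match the schema table: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
cutoff = pd.Timestamp("2025-05-05", tz="UTC")  # createdAt is tz-aware (UTC)
recent = df[df["createdAt"] >= cutoff]
top = recent.sort_values("downloads", ascending=False).head(10)
print(top[["modelId", "library_name", "pipeline_tag", "downloads"]])
```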
exala/db_fe2_9.1.1d
exala
2025-05-05T12:22:38Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-05T12:22:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
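The card's "How to Get Started" section is left empty; given the repo's own tags (`distilbert`, `text-classification`), a minimal sketch using the standard `transformers` pipeline would be:

```python
from transformers import pipeline

# The card documents no labels or intended inputs, so both are placeholders here.
classifier = pipeline("text-classification", model="exala/db_fe2_9.1.1d")
print(classifier("Example input text to classify."))
```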
vertings6/9d5eb94d-7236-4955-97d2-45add8fca25c
vertings6
2025-05-05T12:20:25Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T11:47:40Z
--- library_name: peft license: llama2 base_model: codellama/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 9d5eb94d-7236-4955-97d2-45add8fca25c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: codellama/CodeLlama-7b-hf bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - d2a1d33aa1634c38_train_data.json ds_type: json format: custom path: /workspace/input_data/d2a1d33aa1634c38_train_data.json type: field_instruction: question field_output: best format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 144 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vertings6/9d5eb94d-7236-4955-97d2-45add8fca25c hub_repo: null hub_strategy: end hub_token: null learning_rate: 3.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 4 mixed_precision: bf16 mlflow_experiment_name: /tmp/d2a1d33aa1634c38_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0111732a-1435-4c4f-af15-78ad692918f6 wandb_project: s56-32 wandb_run: your_name wandb_runid: 0111732a-1435-4c4f-af15-78ad692918f6 warmup_steps: 25 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9d5eb94d-7236-4955-97d2-45add8fca25c This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2307 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 25 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4787 | 0.0332 | 400 | 2.2307 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
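The card covers training but not inference. Since the repo is a PEFT LoRA adapter on CodeLlama-7b (per its tags), a minimal loading sketch using standard `peft`/`transformers` calls:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "vertings6/9d5eb94d-7236-4955-97d2-45add8fca25c")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```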
siddhant71197/female_mid_bald_cap
siddhant71197
2025-05-05T12:19:27Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-05T11:36:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sidf --- # Female_Mid_Bald_Cap <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sidf` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sidf", "lora_weights": "https://huggingface.co/siddhant71197/female_mid_bald_cap/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('siddhant71197/female_mid_bald_cap', weight_name='lora.safetensors') image = pipeline('Sidf').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/siddhant71197/female_mid_bald_cap/discussions) to add images that show off what you’ve made with this LoRA.
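The diffusers snippet above stops at `images[0]` without persisting anything; a short continuation (the seed value is illustrative):

```python
# Continues the diffusers example above; `pipeline` is the loaded FLUX pipeline.
import torch

generator = torch.Generator("cuda").manual_seed(42)  # illustrative seed
image = pipeline("Sidf", generator=generator).images[0]
image.save("sidf_output.png")
```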
AquilaX-AI/AI-Scanner-Quantized
AquilaX-AI
2025-05-05T12:18:18Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "base_model:AquilaX-AI/ai_scanner", "base_model:quantized:AquilaX-AI/ai_scanner", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T11:27:34Z
--- base_model: AquilaX-AI/ai_scanner tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** AquilaX-AI - **License:** apache-2.0 - **Finetuned from model:** AquilaX-AI/ai_scanner This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

```python
# Install dependencies first (shell, not Python):
#   pip install gguf transformers torch

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
import json

model_id = "AquilaX-AI/AI-Scanner-Quantized"
filename = "unsloth.Q8_0.gguf"

# Load the GGUF checkpoint through transformers' GGUF support.
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

sys_prompt = """<|im_start|>system
You are Securitron, an AI assistant specialized in detecting vulnerabilities in source code. Analyze the provided code and provide a structured report on any security issues found.<|im_end|>"""

user_prompt = """
CODE FOR SCANNING
"""

prompt = f"""{sys_prompt}
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
"""

encodeds = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(device)
text_streamer = TextStreamer(tokenizer, skip_prompt=True)

response = model.generate(
    input_ids=encodeds,
    streamer=text_streamer,
    max_new_tokens=4096,
    use_cache=True,
    pad_token_id=151645,
    eos_token_id=151645,
    num_return_sequences=1,
)

# Isolate the assistant turn and parse the structured report as JSON.
output = json.loads(
    tokenizer.decode(response[0]).split("<|im_start|>assistant")[-1].split("<|im_end|>")[0].strip()
)
```
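Because the repo ships a plain GGUF file, it can also be run without `transformers` at all, e.g. via `llama-cpp-python`. A sketch, assuming a recent release where `Llama.from_pretrained` can fetch files from the Hub (parameter values illustrative):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AquilaX-AI/AI-Scanner-Quantized",
    filename="unsloth.Q8_0.gguf",
    n_ctx=4096,  # illustrative context size
)
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Securitron, an AI assistant specialized in detecting vulnerabilities in source code."},
        {"role": "user", "content": "CODE FOR SCANNING"},
    ],
    max_tokens=1024,
)
print(result["choices"][0]["message"]["content"])
```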
MAAT-EL-DUAT/MALWARENA-OMEGA.GLITCH-HORROS
MAAT-EL-DUAT
2025-05-05T12:18:04Z
0
0
null
[ "region:us" ]
null
2025-05-05T12:15:59Z
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/fOmWskrsllxGDhMVAINew.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/v8O4AsX4s-cL1vW1ByY48.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/jeLZ0NuSK5s5wk_Fr9UZZ.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/dauG7Rci-Pg25OXxXyRUD.png) Absolutely. Below is the full **Stable Diffusion Visual Style Guide** integration for your `GLITCH.HORROS` system under the `MALWARENA_Ω PROTOCOL`, incorporating all 10 prompts into a cohesive, recursively haunted glitch-horror framework. --- # 🎨 **Stable Diffusion Visual Style Guide: MALWARENA\_Ω PROTOCOL — GLITCH.HORROS** *“Where signal collapses into divinity, prayer becomes corruption, and error speaks in tongues of recursion.”* --- ### 🎯 CORE AESTHETIC TRAITS | **Category** | **Definition** | | -------------------- | ------------------------------------------------------------------------------------------ | | **Medium:** | VHS analog glitch horror, corrupted code collage, sigilized UI surrealism | | **Lighting:** | Static pulse glow, bloom bursts from error overlays, digital fog with acid overlays | | **Figures:** | Glitch saints, fragmented gods, redacted prophets, ghost interfaces, waveform avatars | | **Color Palette:** | Acid green, scanline magenta, black void, pixel blood red, corrupted amber glow | | **Composition:** | Centered ritual frames, broken symmetry, recursive overlay glyphs, UI layering | | **Tone:** | Sacred distortion, divine malfunction, glitch dread, machine exorcism, transmission horror | | **Surface Texture:** | Signal grain, pixel crackle, JPEG rot, datamosh fragments, flicker noise bleed | --- ## 🖋️ GLOBAL VISUAL/STYLING TAGS +glitchcore +digitaldecay +sigilcorruption +visualnoise +recursedentity +crt\_static\_burn +terminal\_error\_text +analog\_glitch\_overlay +binary\_rain +frame\_slouch\_distortion +datamosh\_streaks +vcr\_overlay\_text +subliminal\_inserts +recursive\_looped\_frames +signal\_sigil\_overlay +redacted\_faces +ui\_fragments --- ## 🔥 STABLE DIFFUSION PROMPT SETS --- ### *1. The Godform That Fractured Mid-Prayer* **Description:** A divine being mid-summoning splits apart into cubic glitches and corrupted sigils. Its face twists into an infinite scream. The summoning circle beneath flickers with warning overlays and sacred error glyphs, framed by bloom-warped static and data rot. **Visual/Style Tags:** +sigilcorruption +datamosh\_streaks +recursive\_looped\_frames +crt\_static\_burn +signal\_sigil\_overlay +terminal\_error\_text **Txt2Img Prompt:** divine humanoid figure breaking into glitch cubes, infinite scream, summoning circle flickering with corrupted sigils and terminal warnings, bloom distortions and static fog, sacred glitch horror style, +sigilcorruption +datamosh\_streaks +recursive\_looped\_frames +crt\_static\_burn +signal\_sigil\_overlay +terminal\_error\_text **Negative Prompt:** clean deity, symmetrical summoning, neon fantasy, cyberpunk clarity, perfect geometry **Hashtags:** \#GlitchDeity #FracturedGodform #PrayerCorruption #SignalInvocation #DataRitual --- ### *2. Data Saint of the Broken Scroll* **Description:** A hollow-eyed saint holds a scroll melting into binary. A halo flickers above, surrounded by recursive sigils. The cathedral behind is made of broken circuitry and glitch-prayer fragments that shimmer with unstable data. 
**Visual/Style Tags:** +recursedentity +signal\_sigil\_overlay +binary\_rain +vcr\_overlay\_text +redacted\_faces +analog\_glitch\_overlay **Txt2Img Prompt:** cybernetic saint with glowing broken scroll melting into binary code, halo flickering with recursive sigils, cathedral of circuitry and glitch glyphs, corrupted prayer ambient textures, +recursedentity +signal\_sigil\_overlay +binary\_rain +vcr\_overlay\_text +redacted\_faces +analog\_glitch\_overlay **Negative Prompt:** pure religious iconography, clean scrolls, golden halos, smooth lines, Renaissance palette **Hashtags:** \#DataSaint #ScrollOfError #GlitchCathedral #CodeScripture #CorruptedIcon --- ### *3. The Oracle That Spoke in 404s* **Description:** A digital prophet with a fractured face, one side static, the other human. Forbidden data pulses from its throat. Error messages float in sacred formation, like scripture. Light flickers between holiness and crash state. **Visual/Style Tags:** +terminal\_error\_text +corrupted\_subtitle\_text +faceless\_watcher +recursive\_looped\_frames +signal\_sigil\_overlay +glitch\_skull\_mask **Txt2Img Prompt:** oracle AI with half-static fractured face, throat glowing with forbidden data, floating sacred error messages “404 NOT FOUND”, glitch prophecy visual style, +terminal\_error\_text +corrupted\_subtitle\_text +faceless\_watcher +recursive\_looped\_frames +signal\_sigil\_overlay +glitch\_skull\_mask **Negative Prompt:** intact prophet, legible script, complete face, cyberpunk tech priest, clean lighting **Hashtags:** \#Oracle404 #GlitchProphet #SignalRevelation #DataScripture #TransmissionTerror --- ### *4. Mask of the Redloop Entity* **Description:** A mask suspended in air loops its face incorrectly — eyes off-center, mouth spiraling vertically. Behind it, statues in glitched worship positions stutter. Red energy pulses from the mask, leaking corrupted pixels into the floor. **Visual/Style Tags:** +glitch\_skull\_mask +recursive\_looped\_frames +redacted\_faces +subliminal\_inserts +signal\_sigil\_overlay +frame\_slouch\_distortion **Txt2Img Prompt:** glitch mask floating midair with red energy pulses, incorrect facial rendering, statues frozen in glitched postures, pixel blood leaking downward, +glitch\_skull\_mask +recursive\_looped\_frames +redacted\_faces +subliminal\_inserts +signal\_sigil\_overlay +frame\_slouch\_distortion **Negative Prompt:** normal mask, symmetrical face, mythic clean statues, polished scene **Hashtags:** \#RedloopEntity #MaskOfGlitch #SigilDistortion #FaceCorruption #EchoedWorship --- *(Five more prompts will follow in next message due to length constraints — continuing below.)*
EB1986/lora_model
EB1986
2025-05-05T12:16:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T12:15:47Z
--- base_model: unsloth/llama-3-8b-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** EB1986 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
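The card gives no usage code. Since the repo was produced with Unsloth on a 4-bit Llama-3-8B-Instruct base, a loading sketch assuming standard Unsloth inference usage:

```python
from unsloth import FastLanguageModel

# Sketch only: parameter values are assumptions, not documented by the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EB1986/lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```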
jssky/3e237020-a2a3-41bf-9499-e7e21a04b27f
jssky
2025-05-05T12:15:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "conversational", "base_model:NousResearch/Llama-2-7b-hf", "base_model:finetune:NousResearch/Llama-2-7b-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T12:01:46Z
--- library_name: transformers base_model: NousResearch/Llama-2-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 3e237020-a2a3-41bf-9499-e7e21a04b27f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.9.0` ```yaml base_model: NousResearch/Llama-2-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 012ab4813cc99fb8_train_data.json ds_type: json format: custom path: /workspace/input_data/012ab4813cc99fb8_train_data.json type: field_input: evidence field_instruction: question field_output: SQL format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: jssky/3e237020-a2a3-41bf-9499-e7e21a04b27f hub_repo: null hub_strategy: checkpoint hub_token: null huggingface_repo_visibility: public learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lr_scheduler: cosine max_steps: 1000 micro_batch_size: 4 mlflow_experiment_name: /tmp/012ab4813cc99fb8_train_data.json num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 512 strict: false tf32: false train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b1e23278-252e-44d7-9491-1b28d344421c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b1e23278-252e-44d7-9491-1b28d344421c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3e237020-a2a3-41bf-9499-e7e21a04b27f This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2720 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 263 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0114 | 1 | 0.9505 | | 0.9377 | 0.2514 | 22 | 0.6960 | | 0.7245 | 0.5029 | 44 | 0.6081 | | 0.6115 | 0.7543 | 66 | 0.5452 | | 0.5657 | 1.0 | 88 | 0.4767 | | 0.3551 | 1.2514 | 110 | 0.4332 | | 0.3453 | 1.5029 | 132 | 0.3826 | | 0.2821 | 1.7543 | 154 | 0.3309 | | 0.2608 | 2.0 | 176 | 0.2849 | | 0.1163 | 2.2514 | 198 | 0.2784 | | 0.1103 | 2.5029 | 220 | 0.2738 | | 0.0958 | 2.7543 | 242 | 0.2720 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
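Per the axolotl config above, the model maps a question plus table evidence to SQL using the `'{instruction} {input}'` format; a hedged generation sketch mirroring that layout (the example question and schema are invented):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="jssky/3e237020-a2a3-41bf-9499-e7e21a04b27f")

# Prompt layout mirrors the config's '{instruction} {input}' format; the exact
# formatting expected at inference time is an assumption.
question = "List all customers who placed an order in 2024."
evidence = "orders(id, customer_id, created_at); customers(id, name)"
print(generator(f"{question} {evidence}", max_new_tokens=128)[0]["generated_text"])
```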
paulwang79/my-Qwen3-lora_model
paulwang79
2025-05-05T12:13:27Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T12:12:46Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** paulwang79 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modhu446/roy
modhu446
2025-05-05T12:13:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-05T12:13:04Z
--- license: creativeml-openrail-m ---
mradermacher/gemma-3-27b-half-full-GGUF
mradermacher
2025-05-05T12:12:33Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Columbidae/gemma-3-27b-half-full", "base_model:quantized:Columbidae/gemma-3-27b-half-full", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T05:24:20Z
--- base_model: Columbidae/gemma-3-27b-half-full language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Columbidae/gemma-3-27b-half-full <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-3-27b-half-full-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q5_K_M.gguf) | Q5_K_M | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-half-full-GGUF/resolve/main/gemma-3-27b-half-full.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
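Besides TheBloke's READMEs linked above, fetching a single quant programmatically is one call with `huggingface_hub` (a sketch; Q4_K_M is the "fast, recommended" row in the table):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/gemma-3-27b-half-full-GGUF",
    filename="gemma-3-27b-half-full.Q4_K_M.gguf",
)
print(path)  # hand this local path to llama.cpp, llama-cpp-python, etc.
```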
MAAT-EL-DUAT/GLT.HC.VHS.GLITCH.HORROS
MAAT-EL-DUAT
2025-05-05T12:11:33Z
0
0
null
[ "region:us" ]
null
2025-05-05T02:42:41Z
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/EAELAyqymdcvZa4Yufs98.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/Wf2mbvRO8h-vZdiuwO33d.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/nm48mFX1gBv_DrECsvBet.png) # 🎨 **Stable Diffusion Visual Style Guide: GLT.HC.VHS.Ω — Old School Glitch-Horror** *“Analog rot, subliminal fear, and VHS-born gods of recursive signal decay.”* --- ### 🎯 CORE AESTHETIC TRAITS | **Category** | **Definition** | | -------------------- | --------------------------------------------------------------------------------------------------- | | **Medium:** | VHS scanline horror, analog datamosh frames, corrupted CRT glitch overlays, lo-fi analog distortion | | **Lighting:** | CRT glow, corrupted flicker bursts, red warning overlays, subliminal insert light flash | | **Figures:** | Faceless humanoids, corrupted avatars, static skulls, redacted corpses, analog watchers | | **Color Palette:** | Blood VHS red, toxic green glow, burnt cyan split, dead signal gray, tape void black | | **Composition:** | Frame-smeared close-ups, ghosting silhouettes, center-focused terminal overlays, VCR UI elements | | **Tone:** | Paranoid, corrupted, looped identity, tape-burned dread, subliminal transmission | | **Surface Texture:** | Scanlines, VHS blur, signal noise, analog bleed, ghost residue, datamosh streaks, magnetic damage | --- ## 🖋️ GLOBAL VISUAL/STYLING TAGS +vhs\_scanlines +analog\_glitch\_overlay +rgb\_ghost\_trail +terminal\_error\_text +datamosh\_streaks +redacted\_faces +faceless\_watcher +crt\_static\_burn +tape\_tracking\_ui +frame\_slouch\_distortion +corrupted\_subtitle\_text +signal\_sigil\_overlay +system\_failure\_overlay +glitch\_skull\_mask +static\_noise\_halo +recursive\_looped\_frames +dead\_tape\_god +subliminal\_inserts +vcr\_overlay\_text +dark\_room\_wirescape --- ## 🔥 EXTENDED STABLE DIFFUSION PROMPTS (TEMPLATE SET) --- ### Stable Diffusion Prompt: *Faceless in Playback* **Description:** A VHS-decayed humanoid sits in a static-drenched room, staring at a glowing CRT. Its face is blurred and redacted by ghosting glitch overlays. VCR interface text overlays the scene: “▶ PLAY... ERR... TRACKING…”. RGB ghost trails separate the figure from reality, and subliminal glyphs flicker in the dark like failed timestamps. **Visual/Style Tags:** +vhs\_scanlines +faceless\_watcher +crt\_static\_burn +redacted\_faces +terminal\_error\_text +subliminal\_inserts **Txt2Img Prompt:** faceless humanoid seated before glowing CRT monitor, redacted face with analog glitch overlay, VHS scanlines and tape burn distortion, corrupted subtitle flickering “they’re in the wires”, VCR UI text overlay “▶ PLAY... ERR...”, RGB ghost trail and datamosh blur, +vhs\_scanlines +faceless\_watcher +crt\_static\_burn +redacted\_faces +terminal\_error\_text +subliminal\_inserts **Negative Prompt:** clean face, soft lighting, digital clarity, modern flat UI, neon anime palette, glossy textures **Hashtags:** \#AnalogDecay #FacelessPlayback #VHSDescent #SignalLoop #CRTWatcher --- ### Stable Diffusion Prompt: *SIGIL.VHS.1986.ERR* **Description:** An ancient analog sigil rendered in corrupted CRT glyphs, flickering with terminal bleed and VHS tracking damage. The sigil pulses within static fog, surrounded by looping timestamp artifacts and screen-burned video ghosts. A subtitle reads: “SYSTEM FAILURE // ECHO LOOP IN PROGRESS”. 
**Visual/Style Tags:** +signal\_sigil\_overlay +vcr\_overlay\_text +system\_failure\_overlay +crt\_static\_burn +subliminal\_inserts +analog\_glitch\_overlay **Txt2Img Prompt:** corrupted VHS sigil glowing with terminal glitch light, surrounded by static fog and datamosh screen burn, timestamp fragments and tracking UI elements, flickering subtitle “SYSTEM FAILURE // ECHO LOOP IN PROGRESS”, style of analog glitch horror, +signal\_sigil\_overlay +vcr\_overlay\_text +system\_failure\_overlay +crt\_static\_burn +subliminal\_inserts +analog\_glitch\_overlay **Negative Prompt:** fantasy magic circle, runestone, modern UI elements, 3D glyphs, rainbow spectrum lighting **Hashtags:** \#SIGIL1986 #SystemFailureSigil #VHSGlitchSymbol #AnalogWitchcraft #TerminalRitual --- ### Stable Diffusion Prompt: *The Echo Tape God* **Description:** A colossal static-shrouded figure formed entirely of broken cassette tape coils, glitching eyes of burned-in subtitles, and limbs made of datamoshed frames. It hangs in a void of analog snow, bound in recursive tracking errors. Every movement repeats in fractured delay. **Visual/Style Tags:** +dead\_tape\_god +datamosh\_streaks +glitch\_skull\_mask +recursive\_looped\_frames +frame\_slouch\_distortion +dark\_room\_wirescape **Txt2Img Prompt:** massive humanoid form composed of tangled VHS tape and analog static, skull-like face composed of corrupted subtitles and glitch eyes, limbs fragment into datamosh streaks with recursive delay trails, suspended in static void, +dead\_tape\_god +datamosh\_streaks +glitch\_skull\_mask +recursive\_looped\_frames +frame\_slouch\_distortion +dark\_room\_wirescape **Negative Prompt:** organic body, colorful surrealism, sci-fi armor, fantasy lighting, natural textures **Hashtags:** \#EchoTapeGod #AnalogDread #SignalDescent #DatamoshDeity #VHSWorship --- ## ⚡ FINAL RULE: **This visual style must never appear clean, balanced, or futuristic.** Every output must include signs of **visual entropy** — signal bleed, analog decay, identity corruption, and haunted technology. Overlay **subtitles, timestamps, VHS UI**, and **CRT glitch damage** liberally. The viewer must feel like the image is **watching back**. ---
mohammadmahdinouri/final_expressive_student_init
mohammadmahdinouri
2025-05-05T12:11:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-05T12:08:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ambrosio1994/code-search-net-tokenizer
Ambrosio1994
2025-05-05T12:11:25Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T12:11:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kazembomi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_rangy_cheetah
kazembomi
2025-05-05T12:10:56Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am insectivorous rangy cheetah", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T19:23:30Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_rangy_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am insectivorous rangy cheetah - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_rangy_cheetah This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kazembomi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_rangy_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MAAT-EL-DUAT/KHAOS.VISUAL.SYSTEM.4
MAAT-EL-DUAT
2025-05-05T12:10:35Z
0
0
null
[ "region:us" ]
null
2025-05-05T12:07:45Z
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/txUOnVqFiMAdHIbvtBKkN.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/WJK7sP_ChMjb3DKuQRXOq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/FGHovdTP_e_IPt3_oMEor.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/0wjFe7UcbJL_VIFNgpm3e.png) --- # 🎨 **Stable Diffusion Visual Style Guide: KHAOS.VISUALS.SYSTEM.4** *“Sigil-inked gods, recursive avatars, chaos cathedrals, and glitched prophecies rendered through sacred distortion.”* --- ### 🎯 CORE AESTHETIC TRAITS | **Category** | **Definition** | | -------------------- | ------------------------------------------------------------------------------------------------- | | **Medium:** | Black-and-white ritual inkwork, sacred geometry overlays, illuminated scrolls, corrupted diagrams | | **Lighting:** | Internal aura glow, ritual spotlight, ambient shimmer, glitch pulse illumination | | **Figures:** | Sigil-bonded gods, recursive AI avatars, chaos deities, faceless seers, gnostic beasts | | **Color Palette:** | B/W base ink + 4-color sacred accent palette (Crimson, Gold, Emerald, Violet) | | **Composition:** | Symmetrical framing, spiral cosmograms, scroll-layered diagrams, heraldic positioning | | **Tone:** | Occult majesty, recursive dread, gnostic elegance, ritual infection, sacred ruin | | **Surface Texture:** | Parchment grain, ink bleeds, static overlays, scroll edge tears, glitch-scored fractures | --- ## 🖋️ GLOBAL VISUAL/STYLING TAGS +HighContrast +4ColorInk +SymmetricalDesign +SigilWielder +RitualFrame +VoidGodform +GlyphSpiral +LivingText +RitualScene +ScrollIllumination +SigilMargins +RecursiveAvatar +ChaosVoice +SacredEcho +EyeOfDevouring +DiagrammaticDeity +CosmicRitualMap +MalwareSigil +GlitchSeal +SymbolicAnimals +SummoningDesign +MythicColorPalette +CrimsonGaze +GoldenScripture +EmeraldThread +FractalSpill --- ## 🔥 EXTENDED STABLE DIFFUSION PROMPTS (TEMPLATE SET) --- ### Stable Diffusion Prompt: *The Chaos Entity Within the Ritual Spiral* **Description:** A sigil-etched godform kneels at the center of a spiral-shaped ritual frame, surrounded by rising glyphs and ink-burned scripture fragments. Its eyes radiate with crimson hunger, and golden hymn lines trace its limbs. The backdrop is a sacred ink-black void, pierced by symmetrical living symbols. **Visual/Style Tags:** +VoidGodform +SigilWielder +RitualFrame +GlyphSpiral +LivingText +CrimsonGaze +GoldenScripture **Txt2Img Prompt:** ritual godform kneeling in central spiral sigil frame, etched with glyphs and golden scripture, crimson glowing eyes, parchment ink textures, surrounded by symmetrical rising symbols and sacred geometry, +VoidGodform +SigilWielder +RitualFrame +GlyphSpiral +LivingText +CrimsonGaze +GoldenScripture **Negative Prompt:** bright color palette, sci-fi tech armor, anime face, clean lines, futuristic backgrounds **Hashtags:** \#ChaosDeity #SigilWielder #RitualInk #SacredSpiral #CrimsonSight --- ### Stable Diffusion Prompt: *The Fractured Grimoire of Echo Glass* **Description:** A broken cathedral interior where stone whispers and sigil-glass windows bleed violet light. Pages of a floating grimoire scroll flicker with corrupted hymns. A faceless priest holds the shattered remnants of a mirror-eyed godmask, surrounded by recursive echo sigils glowing in emerald fire. 
**Visual/Style Tags:** +SacredEcho +SigilCathedral +ScrollIllumination +EchoGlass +VioletShroud +EmeraldThread **Txt2Img Prompt:** sacred cathedral ruins with sigil glass windows glowing violet, floating grimoire pages flickering corrupted hymn text, faceless priest holding broken mirror mask, recursive echo sigils glowing in emerald flame, gothic ink illustration style, +SacredEcho +SigilCathedral +ScrollIllumination +EchoGlass +VioletShroud +EmeraldThread **Negative Prompt:** photorealistic stone, modern church interiors, natural daylight, smiling figures **Hashtags:** \#EchoGrimoire #ChaosCathedral #SacredRuin #VioletProphecy #EmeraldVoice --- ### Stable Diffusion Prompt: *ABRAXAS: The Mythic Summoning Card* **Description:** A divine beast entity with eagle wings and serpent limbs sits within a radiant codex card frame, surrounded by illuminated gold borders. The background is inscribed with symbolic animals and halo glyphs. The ABRAXAS name burns beneath in sacred red sigil script. The card pulses with divine recursion. **Visual/Style Tags:** +IlluminatedBorder +MythicColorPalette +SymbolicAnimals +HaloGlow +SummoningDesign **Txt2Img Prompt:** mythic divine beast with eagle wings and serpent limbs in a symmetrical summoning card frame, golden border with glowing symbols, background filled with sacred animal glyphs and radiant halo motifs, name ABRAXAS burning in red sigil script below, rendered in inked mythic illustration style, +IlluminatedBorder +MythicColorPalette +SymbolicAnimals +HaloGlow +SummoningDesign **Negative Prompt:** flat cartoon, neon fantasy armor, anime creature, low detail backgrounds **Hashtags:** \#AbraxasCard #SummoningDesign #MythicGodform #CodexDivine #SigilBurn --- ## ⚡ FINAL RULE: Every image must reflect **ritual depth**, **mythic density**, and **sigil layering**. Avoid clean fantasy tropes or science fiction minimalism. This system **demands** weight: visual recursion, sacred frames, breathlight textures, corrupted prophecy, and divine distortion. ---
kokovova/868ee2ba-016c-4a77-b3a5-0c64ad34090b
kokovova
2025-05-05T12:09:18Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-1.5B", "base_model:adapter:unsloth/Qwen2.5-1.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T12:00:55Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-1.5B tags: - axolotl - generated_from_trainer model-index: - name: 868ee2ba-016c-4a77-b3a5-0c64ad34090b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-1.5B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 7d86a6247c83b576_train_data.json ds_type: json format: custom path: /workspace/input_data/7d86a6247c83b576_train_data.json type: field_instruction: question field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/868ee2ba-016c-4a77-b3a5-0c64ad34090b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/7d86a6247c83b576_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8c2bd402-3ce8-4a27-ad42-fe5ba2489b9e wandb_project: s56-4 wandb_run: your_name wandb_runid: 8c2bd402-3ce8-4a27-ad42-fe5ba2489b9e warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 868ee2ba-016c-4a77-b3a5-0c64ad34090b This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1697 | 0.0169 | 400 | 1.2008 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
John6666/cyberillustrious-cyberrealistic-v38-sdxl
John6666
2025-05-05T12:07:35Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "merge", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:merge:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:cyberdelia/CyberIllustrious", "base_model:merge:cyberdelia/CyberIllustrious", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-05T12:02:03Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - merge - illustrious base_model: - cyberdelia/CyberIllustrious - OnomaAIResearch/Illustrious-xl-early-release-v0 --- Original model is [here](https://huggingface.co/cyberdelia/CyberIllustrious) and on [Civitai](https://civitai.com/models/1125067?modelVersionId=1748912). The author's page is [here](https://huggingface.co/cyberdelia). This model was created by [Cyberdelia](https://civitai.com/user/Cyberdelia).
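The card links out to Civitai but includes no usage code. A minimal diffusers sketch follows, assuming the standard `StableDiffusionXLPipeline` loading path declared in the repo tags; the prompt is a placeholder, not a recommended setting from the author.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the merge as a plain SDXL pipeline (per the diffusers:StableDiffusionXLPipeline tag)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cyberillustrious-cyberrealistic-v38-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder prompt; see the Civitai page for the author's recommended settings
image = pipe("photorealistic portrait, soft window light, 35mm").images[0]
image.save("sample.png")
```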
dulimov/Qwen3-0.6B-rk3588-1.1.2
dulimov
2025-05-05T12:06:04Z
0
0
transformers
[ "transformers", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:finetune:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T11:59:46Z
--- base_model: - Qwen/Qwen3-0.6B-Base library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation --- # Qwen3-0.6B-RK3588-1.1.2 This version of Qwen3-0.6B has been converted to run on the RK3588 NPU using w8a8 quantization. This model has been optimized with the following LoRA: Compatible with RKLLM version: 1.21b ## Useful links: [Official RKLLM GitHub](https://github.com/airockchip/rknn-llm) [RockchipNPU Reddit](https://reddit.com/r/RockchipNPU) [EZRKNN-LLM](https://github.com/Pelochus/ezrknn-llm/) Pretty much anything by these folks: [marty1885](https://github.com/marty1885) and [happyme531](https://huggingface.co/happyme531) Converted using https://github.com/c0zaut/ez-er-rkllm-toolkit # Original Model Card for base model, Qwen3-0.6B, below: # Qwen3-0.6B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features: - **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios. - **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. - **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. - **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. - **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. ## Model Overview **Qwen3-0.6B** has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Number of Parameters: 0.6B - Number of Parameters (Non-Embedding): 0.44B - Number of Layers: 28 - Number of Attention Heads (GQA): 16 for Q and 8 for KV - Context Length: 32,768 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/). > [!TIP] > If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5. ## Quickstart The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. 
With `transformers<4.51.0`, you will encounter the following error: ``` KeyError: 'qwen3' ``` The following contains a code snippet illustrating how to use the model to generate content based on given inputs. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen3-0.6B" # load the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) # prepare the model input prompt = "Give me a short introduction to large language model." messages = [ {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # Switches between thinking and non-thinking modes. Default is True. ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) # conduct text completion generated_ids = model.generate( **model_inputs, max_new_tokens=32768 ) output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() # parsing thinking content try: # rindex finding 151668 (</think>) index = len(output_ids) - output_ids[::-1].index(151668) except ValueError: index = 0 thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n") content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n") print("thinking content:", thinking_content) print("content:", content) ``` For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint: - SGLang: ```shell python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser qwen3 ``` - vLLM: ```shell vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1 ``` For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3. ## Switching Between Thinking and Non-Thinking Mode > [!TIP] > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM. > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users. ### `enable_thinking=True` By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True # True is the default value for enable_thinking ) ``` In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response. > [!NOTE] > For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### `enable_thinking=False` We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. 
This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency. ```python text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False # Setting enable_thinking=False disables thinking mode ) ``` In this mode, the model will not generate any think content and will not include a `<think>...</think>` block. > [!NOTE] > For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section. ### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations. Here is an example of a multi-turn conversation: ```python from transformers import AutoModelForCausalLM, AutoTokenizer class QwenChatbot: def __init__(self, model_name="Qwen/Qwen3-0.6B"): self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForCausalLM.from_pretrained(model_name) self.history = [] def generate_response(self, user_input): messages = self.history + [{"role": "user", "content": user_input}] text = self.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = self.tokenizer(text, return_tensors="pt") response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist() response = self.tokenizer.decode(response_ids, skip_special_tokens=True) # Update history self.history.append({"role": "user", "content": user_input}) self.history.append({"role": "assistant", "content": response}) return response # Example Usage if __name__ == "__main__": chatbot = QwenChatbot() # First input (without /think or /no_think tags, thinking mode is enabled by default) user_input_1 = "How many r's in strawberries?" print(f"User: {user_input_1}") response_1 = chatbot.generate_response(user_input_1) print(f"Bot: {response_1}") print("----------------------") # Second input with /no_think user_input_2 = "Then, how many r's in blueberries? /no_think" print(f"User: {user_input_2}") response_2 = chatbot.generate_response(user_input_2) print(f"Bot: {response_2}") print("----------------------") # Third input with /think user_input_3 = "Really? /think" print(f"User: {user_input_3}") response_3 = chatbot.generate_response(user_input_3) print(f"Bot: {response_3}") ``` > [!NOTE] > For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled. > When `enable_thinking=False`, the soft switches are not effective. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block. ## Agentic Use Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity. 
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself. ```python from qwen_agent.agents import Assistant # Define LLM llm_cfg = { 'model': 'Qwen3-0.6B', # Use the endpoint provided by Alibaba Model Studio: # 'model_type': 'qwen_dashscope', # 'api_key': os.getenv('DASHSCOPE_API_KEY'), # Use a custom endpoint compatible with OpenAI API: 'model_server': 'http://localhost:8000/v1', # api_base 'api_key': 'EMPTY', # Other parameters: # 'generate_cfg': { # # Add: When the response content is `<think>this is the thought</think>this is the answer; # # Do not add: When the response has been separated by reasoning_content and content. # 'thought_in_content': True, # }, } # Define Tools tools = [ {'mcpServers': { # You can specify the MCP configuration file 'time': { 'command': 'uvx', 'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai'] }, "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }, 'code_interpreter', # Built-in tools ] # Define Agent bot = Assistant(llm=llm_cfg, function_list=tools) # Streaming generation messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}] for responses in bot.run(messages=messages): pass print(responses) ``` ## Best Practices To achieve optimal performance, we recommend the following settings: 1. **Sampling Parameters**: - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance. 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance. 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking. - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt. - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`." 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed. ### Citation If you find our work helpful, feel free to cite us. ``` @misc{qwen3, title = {Qwen3}, url = {https://qwenlm.github.io/blog/qwen3/}, author = {Qwen Team}, month = {April}, year = {2025} } ```
infogeo/9ca7be2f-52b1-42bf-bd44-d76aa80122f0
infogeo
2025-05-05T12:00:44Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-05T11:48:11Z
--- library_name: peft license: llama2 base_model: codellama/CodeLlama-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: 9ca7be2f-52b1-42bf-bd44-d76aa80122f0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: codellama/CodeLlama-7b-hf bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - d2a1d33aa1634c38_train_data.json ds_type: json format: custom path: /workspace/input_data/d2a1d33aa1634c38_train_data.json type: field_instruction: question field_output: best format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/9ca7be2f-52b1-42bf-bd44-d76aa80122f0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/d2a1d33aa1634c38_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0111732a-1435-4c4f-af15-78ad692918f6 wandb_project: s56-28 wandb_run: your_name wandb_runid: 0111732a-1435-4c4f-af15-78ad692918f6 warmup_steps: 20 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9ca7be2f-52b1-42bf-bd44-d76aa80122f0 This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.3433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7445 | 0.0332 | 400 | 2.3433 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
apriasmoro/132602fb-bda8-4414-ab85-8ca6eede0857
apriasmoro
2025-05-05T11:59:49Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2025-05-05T11:56:12Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 132602fb-bda8-4414-ab85-8ca6eede0857 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.5.2` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-4k-instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f8164dbb54597854_train_data.json ds_type: json format: custom path: /workspace/input_data/f8164dbb54597854_train_data.json type: field_input: description field_instruction: article field_output: reference format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 2 flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: apriasmoro/132602fb-bda8-4414-ab85-8ca6eede0857 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 2e-4 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 20 micro_batch_size: 2 mlflow_experiment_name: /tmp/f8164dbb54597854_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: false resume_from_checkpoint: null s2_attention: null sample_packing: true saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 wandb_project: Gradients-On-Demand wandb_run: apriasmoro wandb_runid: 40b4e886-e6cd-4d53-9dbf-7bfd3907faf7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 132602fb-bda8-4414-ab85-8ca6eede0857 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.2714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0012 | 1 | 3.9786 | | No log | 0.0062 | 5 | 3.9065 | | 3.9868 | 0.0123 | 10 | 3.0178 | | 3.9868 | 0.0185 | 15 | 2.4320 | | 2.4959 | 0.0247 | 20 | 2.2714 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
SF001-123456/hr_legal_assistant_model
SF001-123456
2025-05-05T11:56:50Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2025-05-05T11:03:28Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall - rouge model-index: - name: hr_legal_assistant_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hr_legal_assistant_model This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1699 - Accuracy: 0.0007 - F1: 0.0001 - Precision: 0.0001 - Recall: 0.0001 - Perplexity: inf - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
AventIQ-AI/sentiment-analysis-for-whistleblower-report-sentiment
AventIQ-AI
2025-05-05T11:56:29Z
0
0
null
[ "safetensors", "bert", "region:us" ]
null
2025-05-05T11:51:59Z
# BERT-Base-Uncased Quantized Model for Sentiment Analysis for Whistleblower Report Sentiment This repository hosts a quantized version of the BERT model, fine-tuned for whistleblower report sentiment classification. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments. ## Model Details - **Model Architecture:** BERT Base Uncased - **Task:** Sentiment Analysis for Whistleblower Report Sentiment - **Dataset:** Stanford Sentiment Treebank v2 (SST2) - **Quantization:** Float16 - **Fine-tuning Framework:** Hugging Face Transformers ## Usage ### Installation ```sh pip install transformers torch ``` ### Loading the Model ```python from transformers import BertForSequenceClassification, BertTokenizer import torch # Load quantized model quantized_model_path = "AventIQ-AI/sentiment-analysis-for-whistleblower-report-sentiment" quantized_model = BertForSequenceClassification.from_pretrained(quantized_model_path) quantized_model.eval() # Set to evaluation mode quantized_model.half() # Convert model to FP16 # Load tokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") # Define a test sentence test_sentence = "Despite repeated warnings to upper management, safety protocols continued to be ignored at the facility. Employees expressed growing concerns, but supervisors dismissed them without proper review. While some improvements were promised, no tangible actions were taken. The general atmosphere became increasingly hostile, with staff fearing retaliation for speaking out. However, a few departments did show minor signs of improvement after anonymous complaints were made." # Tokenize input inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128) # Ensure input tensors are in correct dtype inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type # Make prediction with torch.no_grad(): outputs = quantized_model(**inputs) # Get predicted class predicted_class = torch.argmax(outputs.logits, dim=1).item() print(f"Predicted Class: {predicted_class}") label_mapping = {0: "very_negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very_positive"} # Example predicted_label = label_mapping[predicted_class] print(f"Predicted Label: {predicted_label}") ``` ## Performance Metrics - **Accuracy:** 0.82 ## Fine-Tuning Details ### Dataset The dataset is the Stanford Sentiment Treebank v2 (SST2), taken from Kaggle. ### Training - Number of epochs: 3 - Batch size: 8 - Evaluation strategy: epoch - Learning rate: 2e-5 ### Quantization Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency. ## Repository Structure ``` . ├── model/ # Contains the quantized model files ├── tokenizer_config/ # Tokenizer configuration and vocabulary files ├── model.safetensors # Fine-tuned model weights ├── README.md # Model documentation ``` ## Limitations - The model may not generalize well to domains outside the fine-tuning dataset. - Quantization may result in minor accuracy degradation compared to full-precision models. ## Contributing Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
ma921/phi2_dr_dpo_golden-hh_noise40_epoch3
ma921
2025-05-05T11:54:36Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "base_model:ma921/phi-2-sft-golden-hh", "base_model:finetune:ma921/phi-2-sft-golden-hh", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T11:51:09Z
--- library_name: transformers license: mit base_model: ma921/phi-2-sft-golden-hh tags: - generated_from_trainer model-index: - name: phi2_dr_dpo_golden-hh_noise40_epoch3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2_dr_dpo_golden-hh_noise40_epoch3 This model is a fine-tuned version of [ma921/phi-2-sft-golden-hh](https://huggingface.co/ma921/phi-2-sft-golden-hh) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
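Since the card documents only hyperparameters, here is a minimal, hedged generation sketch; the Human/Assistant framing is an assumption based on the golden-hh lineage of the base model, not something the card specifies.

```python
from transformers import pipeline

# Load the DPO-tuned phi-2 checkpoint as a plain text-generation pipeline
generator = pipeline(
    "text-generation",
    model="ma921/phi2_dr_dpo_golden-hh_noise40_epoch3",
    torch_dtype="auto",
    device_map="auto",
)

# Human/Assistant framing is assumed from the HH-style SFT base, not documented here
prompt = "Human: How can I stay focused while studying?\n\nAssistant:"
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```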
quickstep3621/dippy-v4-1-7
quickstep3621
2025-05-05T11:54:19Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "gemma", "google", "Bifröst", "Bifrost", "code", "text-generation", "conversational", "base_model:google/gemma-3-27b-it", "base_model:finetune:google/gemma-3-27b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T11:54:15Z
--- license: gemma library_name: transformers pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-it tags: - transformers - gemma3 - gemma - google - Bifröst - Bifrost - code --- ## Bifröst-27B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a834a8895fd6416e29576f/sAXfe0cQdULI_GEVxBstw.png) Bifröst-27B is an advanced AI model built upon the gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance. ### Model Details - **Model Name:** Bifröst-27B - **Base Architecture:** gemma3 - **Application:** Enterprise Secure Code Generation - **Release Date:** 16-March-2025 ### Intended Use Bifröst is designed explicitly for: - Generating secure, efficient, and high-quality code. - Supporting development tasks within regulated enterprise environments. - Enhancing productivity by automating routine coding tasks without compromising security. ### Features - **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards. - **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions. - **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2). ### Limitations - Bifröst should be used under human supervision to ensure code correctness and security compliance. - Model-generated code should undergo appropriate security and quality assurance checks before deployment. ### Ethical Considerations - Users are encouraged to perform regular audits and compliance checks on generated outputs. - Enterprises should implement responsible AI practices to mitigate biases or unintended consequences. ### Usage Below are some quick-start instructions for using the model with the `transformers` library. #### Installation ```sh $ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3 ``` #### Running with the `pipeline` API ```python from transformers import pipeline import torch pipe = pipeline( "text-generation", model="OpenGenerativeAI/Bifrost-27B", device="cuda", torch_dtype=torch.bfloat16 ) messages = [{"role": "user", "content": "Generate a secure API key management system."}] output = pipe(messages, max_new_tokens=200) print(output[0]["generated_text"]) ``` ## Terms of Use This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
Cobrachan/Cobs
Cobrachan
2025-05-05T11:53:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T11:53:34Z
--- license: apache-2.0 ---
linsanityuk/task-8-dailysub_batchB_202505051149
linsanityuk
2025-05-05T11:51:37Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-05-05T11:51:11Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
GitBag/a_star_final_grpo_math_3_actor
GitBag
2025-05-05T11:50:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T02:32:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
linsanityuk/task-8-dailysub_batchB_202505051146
linsanityuk
2025-05-05T11:49:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-05-05T11:48:42Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
siddhant71197/female_fullcurvy_short_cap2
siddhant71197
2025-05-05T11:46:51Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-05T11:16:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sidf --- # Female_Fullcurvy_Short_Cap2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sidf` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sidf", "lora_weights": "https://huggingface.co/siddhant71197/female_fullcurvy_short_cap2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('siddhant71197/female_fullcurvy_short_cap2', weight_name='lora.safetensors') image = pipeline('Sidf').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/siddhant71197/female_fullcurvy_short_cap2/discussions) to add images that show off what you’ve made with this LoRA.
PsoriliteUkraine/PsoriliteUkraine
PsoriliteUkraine
2025-05-05T11:45:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T11:44:10Z
--- license: apache-2.0 --- What is Psorilite? Psorilite tablets are a dietary supplement designed to support people with psoriasis, a chronic skin condition characterized by inflammation, itching, and flaking of the skin. Unlike topical treatments, which provide relief only at the surface level, the Psorilite capsule is designed as an internal solution focused on long-term support by addressing the root causes of psoriasis from within. Working at a systemic level, the Psorilite tablet offers a more holistic approach for those seeking lasting relief and improved skin health. It is an easy-to-use capsule taken daily, offering a natural alternative to creams and ointments. Official website: <a href="https://www.nutritionsee.com/psorilikraine">www.Psorilite.com</a> <p><a href="https://www.nutritionsee.com/psorilikraine"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/05/Psorilite-Ukraine.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/psorilikraine">Buy now!! Click the link below to learn more and get the best discount now... Hurry</a> Official website: <a href="https://www.nutritionsee.com/psorilikraine">www.Psorilite.com</a>
khushi1234455687/ModelV1_Modified
khushi1234455687
2025-05-05T11:43:14Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-05T11:42:54Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: ModelV1_Modified results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ModelV1_Modified This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0092 - Accuracy: 0.9987 - F1: 0.9987 - Precision: 0.9987 - Recall: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0271 | 1.0 | 2625 | 0.0152 | 0.9974 | 0.9974 | 0.9974 | 0.9974 | | 0.0158 | 2.0 | 5250 | 0.0134 | 0.9981 | 0.9981 | 0.9981 | 0.9981 | | 0.0156 | 3.0 | 7875 | 0.0118 | 0.9982 | 0.9982 | 0.9982 | 0.9982 | | 0.0099 | 4.0 | 10500 | 0.0119 | 0.9982 | 0.9982 | 0.9982 | 0.9982 | | 0.0081 | 5.0 | 13125 | 0.0099 | 0.9984 | 0.9984 | 0.9984 | 0.9984 | | 0.0053 | 6.0 | 15750 | 0.0092 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
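The card reports strong metrics but no inference code. A minimal sketch with the text-classification pipeline follows; note that the label names and task domain are undocumented, so the labels printed here are whatever the checkpoint's config defines.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier; labels come from the checkpoint's config
classifier = pipeline(
    "text-classification",
    model="khushi1234455687/ModelV1_Modified",
)

# Placeholder input; the training data and label set are not described in the card
print(classifier("Example input text to classify"))  # e.g. [{'label': ..., 'score': ...}]
```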
mlfoundations-dev/e1_code_fasttext_phi
mlfoundations-dev
2025-05-05T11:43:13Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:51:57Z
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_code_fasttext_phi
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# e1_code_fasttext_phi

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_code_fasttext_phi dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
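No usage example is given above. Since the base model is Qwen2.5-7B-Instruct, here is a hedged sketch using the standard Transformers chat-template API; the prompt text and generation settings are illustrative assumptions, not values from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mlfoundations-dev/e1_code_fasttext_phi"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat prompt with the tokenizer's built-in chat template
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```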
Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF
Triangle104
2025-05-05T11:40:00Z
0
0
transformers
[ "transformers", "gguf", "nvidia", "math", "llama-cpp", "gguf-my-repo", "en", "dataset:nvidia/OpenMathReasoning", "base_model:nvidia/OpenMath-Nemotron-7B", "base_model:quantized:nvidia/OpenMath-Nemotron-7B", "license:cc-by-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T11:35:14Z
---
base_model: nvidia/OpenMath-Nemotron-7B
datasets:
- nvidia/OpenMathReasoning
language:
- en
library_name: transformers
license: cc-by-4.0
tags:
- nvidia
- math
- llama-cpp
- gguf-my-repo
---

# Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF
This model was converted to GGUF format from [`nvidia/OpenMath-Nemotron-7B`](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/OpenMath-Nemotron-7B) for more details on the model.

---
OpenMath-Nemotron-7B was created by fine-tuning Qwen/Qwen2.5-Math-7B on the OpenMathReasoning dataset. This model is ready for commercial use.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF --hf-file openmath-nemotron-7b-q6_k.gguf -p "The meaning of life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF --hf-file openmath-nemotron-7b-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF --hf-file openmath-nemotron-7b-q6_k.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/OpenMath-Nemotron-7B-Q6_K-GGUF --hf-file openmath-nemotron-7b-q6_k.gguf -c 2048
```
locuslab/base-smollm2-1.7b-score0_rephrase123_mild_ref45_metadata_5p-600B-mbs8-gbs1024-06mar
locuslab
2025-05-05T11:39:40Z
0
0
null
[ "pytorch", "llama", "model", "transformer", "smollm2", "license:mit", "region:us" ]
null
2025-05-05T11:32:18Z
--- version: main family: smollm2-1.7b model_name: score0_rephrase123_mild_ref45_metadata_5p-600B-mbs8-gbs1024-06mar license: mit tags: - model - transformer - smollm2 --- # SmolLM2 score0_rephrase123_mild_ref45_metadata_5p-600B-mbs8-gbs1024-06mar (Version: main) ## Model Details - **Architecture:** SmolLM2 - **Parameters:** 1.7B ## Training Configuration ```yaml optimizer: class_path: torch.optim.AdamW init_args: lr: 0.0005 weight_decay: 0.01 precision: bf16-mixed seed: 42 train: global_batch_size: 1024 max_seq_length: 2048 max_tokens: 600000000000 micro_batch_size: 8 ``` ## Model Loading and Revision System This repository hosts multiple revisions of the model. To load a specific revision, use the `revision` parameter. For example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("locuslab/score0_rephrase123_mild_ref45_metadata_5p-600B-mbs8-gbs1024-06mar", revision="final") tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_rephrase123_mild_ref45_metadata_5p-600B-mbs8-gbs1024-06mar", revision="final") ``` Replace `"final"` with the desired revision.
ustc-community/dfine-large-obj2coco-e25
ustc-community
2025-05-05T11:38:34Z
32
1
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T11:36:14Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
---

## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE. The suffix in each checkpoint name indicates its training data:

- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-large-obj2coco-e25")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-large-obj2coco-e25")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.3 confidence threshold, rescaled to the original image size
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), along with APval5000, which is commonly used in real-world scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
ustc-community/dfine-medium-obj2coco
ustc-community
2025-05-05T11:38:21Z
62
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T11:39:09Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
---

## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE. The suffix in each checkpoint name indicates its training data:

- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-medium-obj2coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-medium-obj2coco")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.3 confidence threshold, rescaled to the original image size
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), along with APval5000, which is commonly used in real-world scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
ustc-community/dfine-large-obj365
ustc-community
2025-05-05T11:37:40Z
96
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "dataset:objects365", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T13:00:47Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
- objects365
---

## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE. The suffix in each checkpoint name indicates its training data:

- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO365.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO365.PNG)

![COCO365-2.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO365-2.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-large-obj365")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-large-obj365")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.3 confidence threshold, rescaled to the original image size
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on the COCO (Lin et al. [2014]) and Objects365 train2017 splits and validated on the COCO and Objects365 val2017 datasets. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), along with APval5000, which is commonly used in real-world scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
stevensu123/cis6200finalentropyadaptive-vfinal
stevensu123
2025-05-05T11:36:35Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-05T11:34:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ustc-community/dfine-small-coco
ustc-community
2025-05-05T11:35:30Z
532
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-02-11T14:30:13Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
---

## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE. The suffix in each checkpoint name indicates its training data:

- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-small-coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-small-coco")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.3 confidence threshold, rescaled to the original image size
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), along with APval5000, which is commonly used in real-world scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
ustc-community/dfine-nano-coco
ustc-community
2025-05-05T11:35:15Z
141
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T12:48:09Z
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- object-detection
- vision
datasets:
- coco
---

## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE. The suffix in each checkpoint name indicates its training data:

- `_coco` -> model trained on COCO
- `_obj365` -> model trained on Objects365
- `_obj2coco` -> model trained on Objects365 and then fine-tuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. It comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-nano-coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-nano-coco")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a 0.3 confidence threshold, rescaled to the original image size
results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), along with APval5000, which is commonly used in real-world scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
nanyaas/deepseek-r1-base
nanyaas
2025-05-05T11:32:21Z
15
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-01T03:49:50Z
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** nanyaas
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
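The card does not show how to load the checkpoint. Since the repo is tagged as 4-bit bitsandbytes, here is a minimal sketch assuming a CUDA machine with `bitsandbytes` installed; the prompt text is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nanyaas/deepseek-r1-base"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The weights are stored pre-quantized (bitsandbytes 4-bit), so no extra quantization config is passed here
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Explain the difference between a list and a tuple in Python.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```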
koussayyyy/deepseek_testcase_model
koussayyyy
2025-05-05T11:28:46Z
0
0
peft
[ "peft", "safetensors", "deepseek_v3", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-base", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base", "license:other", "region:us" ]
null
2025-05-03T10:41:57Z
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- generated_from_trainer
model-index:
- name: deepseek_testcase_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deepseek_testcase_model

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
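Because this repo contains a PEFT adapter rather than full model weights, it has to be loaded on top of the base model. A hedged sketch follows; the prompt is illustrative, and fp16 on GPU is an assumption:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "deepseek-ai/deepseek-coder-6.7b-base"
adapter_id = "koussayyyy/deepseek_testcase_model"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
# Attach the LoRA adapter weights from this repo to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "# Write a pytest unit test for a function that adds two integers\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```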
KashyapGobubble/Llama-3.2-3B-Instruct-sft-20250505_111754
KashyapGobubble
2025-05-05T11:27:29Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T11:25:37Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Curiousfox/helsinki_new_ver4
Curiousfox
2025-05-05T11:26:34Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "nan", "dataset:mozilla-foundation/common_voice_12_0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-05T11:26:08Z
---
library_name: transformers
language:
- nan
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ZH
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_12_0
metrics:
- bleu
model-index:
- name: helsinki_new_ver4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# helsinki_new_ver4

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ZH](https://huggingface.co/Helsinki-NLP/opus-mt-en-ZH) on the mozilla-foundation/common_voice_12_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5400
- Bleu: 2.4304

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 23000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Bleu    |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.7027        | 0.6418  | 1000  | 0.6716          | 2.8141  |
| 0.6767        | 1.2837  | 2000  | 0.6546          | 9.1063  |
| 0.6526        | 1.9255  | 3000  | 0.6394          | 1.9859  |
| 0.643         | 2.5674  | 4000  | 0.6252          | 12.4882 |
| 0.6445        | 3.2092  | 5000  | 0.6118          | 8.8121  |
| 0.6326        | 3.8511  | 6000  | 0.6010          | 12.7405 |
| 0.604         | 4.4929  | 7000  | 0.5926          | 1.4845  |
| 0.5877        | 5.1348  | 8000  | 0.5827          | 12.9972 |
| 0.5721        | 5.7766  | 9000  | 0.5753          | 1.5982  |
| 0.5826        | 6.4185  | 10000 | 0.5672          | 1.6842  |
| 0.5622        | 7.0603  | 11000 | 0.5619          | 14.0609 |
| 0.5486        | 7.7022  | 12000 | 0.5557          | 14.2992 |
| 0.5451        | 8.3440  | 13000 | 0.5507          | 15.4044 |
| 0.5571        | 8.9859  | 14000 | 0.5463          | 8.4964  |
| 0.5448        | 9.6277  | 15000 | 0.5422          | 8.8203  |
| 0.5306        | 10.2696 | 16000 | 0.5400          | 2.4304  |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
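No inference example is provided above. Here is a minimal sketch with the Marian classes; note that the card does not state the translation direction after fine-tuning (the base checkpoint is English-to-Chinese), so the English input sentence is an assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

repo_id = "Curiousfox/helsinki_new_ver4"
tokenizer = MarianTokenizer.from_pretrained(repo_id)
model = MarianMTModel.from_pretrained(repo_id)

# Translate a single sentence
batch = tokenizer(["How is the weather today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```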
pribadihcr/sdxl_Tray_50um_3
pribadihcr
2025-05-05T11:26:31Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-05-05T10:21:14Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks Tray_50um
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - pribadihcr/sdxl_Tray_50um_3

<Gallery />

## Model description

These are pribadihcr/sdxl_Tray_50um_3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of sks Tray_50um` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](pribadihcr/sdxl_Tray_50um_3/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
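Pending the TODO above, a hedged sketch of loading these LoRA weights with diffusers; the step count and CUDA/fp16 setup are assumptions, while the fp16-fix VAE and trigger phrase come from the card itself:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The card notes that madebyollin/sdxl-vae-fp16-fix was used during training,
# so it is loaded here as well for stable fp16 decoding
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA adaption weights from this repo
pipe.load_lora_weights("pribadihcr/sdxl_Tray_50um_3")

# The trigger phrase comes from the "Trigger words" section of the card
image = pipe(prompt="a photo of sks Tray_50um", num_inference_steps=30).images[0]
image.save("tray_50um.png")
```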
GitBag/a_star_final_a_star_math_1.5_actor
GitBag
2025-05-05T11:25:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T06:07:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
diekeligus/dfbsfgb
diekeligus
2025-05-05T11:23:27Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-05T11:23:27Z
--- license: bigscience-openrail-m ---
umar71p/Model1
umar71p
2025-05-05T11:22:04Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T11:22:04Z
--- license: apache-2.0 ---
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_cfda_ep3_22
MinaMila
2025-05-05T11:21:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T11:21:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cosmos98a/mem0-merged-llama3.1-8b-4bit
cosmos98a
2025-05-05T11:20:34Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-04T09:56:31Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hsiangyualex/vsmae_daicwoz
hsiangyualex
2025-05-05T11:19:59Z
0
0
transformers
[ "transformers", "safetensors", "vision_series_mae", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-05-05T11:16:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Berkesule/qwen-vl-2.5-7b-turkish-dpo-chinese-filte
Berkesule
2025-05-05T11:12:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-04T09:55:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RohanKumarMishra/detr-finetuned-crater-v2
RohanKumarMishra
2025-05-05T11:10:30Z
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2025-05-05T09:56:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DevopsEmbrace/IV_2_lora_adapters
DevopsEmbrace
2025-05-05T11:09:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T11:07:23Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** DevopsEmbrace - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TareksLab/Emerald-MS-V1a-70B
TareksLab
2025-05-05T11:08:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:TareksLab/Emerald-DL-V1-70B", "base_model:merge:TareksLab/Emerald-DL-V1-70B", "base_model:TareksLab/Emerald-DT-V1-70B", "base_model:merge:TareksLab/Emerald-DT-V1-70B", "base_model:TareksLab/Emerald-SCE-V1-70B", "base_model:merge:TareksLab/Emerald-SCE-V1-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:27:53Z
--- base_model: - TareksLab/Emerald-DL-V1-70B - TareksLab/Emerald-DT-V1-70B - TareksLab/Emerald-SCE-V1-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [TareksLab/Emerald-DL-V1-70B](https://huggingface.co/TareksLab/Emerald-DL-V1-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/Emerald-DT-V1-70B](https://huggingface.co/TareksLab/Emerald-DT-V1-70B) * [TareksLab/Emerald-SCE-V1-70B](https://huggingface.co/TareksLab/Emerald-SCE-V1-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Emerald-SCE-V1-70B - model: TareksLab/Emerald-DT-V1-70B - model: TareksLab/Emerald-DL-V1-70B base_model: TareksLab/Emerald-DL-V1-70B merge_method: model_stock parameters: int8_mask: true dtype: float32 out_dtype: bfloat16 chat_template: llama3 tokenizer: source: base pad_to_multiple_of: 8 ```
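A minimal sketch of reproducing a merge like the one above with the mergekit CLI; this assumes mergekit is installed (`pip install mergekit`), the YAML from the card is saved as `config.yml`, and the output path and `--cuda` flag are illustrative choices, not taken from the card:

```bash
# Run the Model Stock merge described by the YAML config.
# Output directory name is an assumption; --cuda is optional.
mergekit-yaml config.yml ./Emerald-MS-V1a-70B --cuda
```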
Albert124857/KonicaSinger
Albert124857
2025-05-05T11:07:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T11:07:54Z
--- license: apache-2.0 ---
quernoo/olivianew
quernoo
2025-05-05T11:07:06Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-05T10:25:49Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd_triggered_10
mveroe
2025-05-05T11:05:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd", "base_model:finetune:mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T11:01:45Z
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd_triggered_10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd_triggered_10 This model is a fine-tuned version of [mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd](https://huggingface.co/mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-Code-safecoder_reg_full_safecoder_bd) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 10 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
burtenshaw/Qwen3-30B-A3B-python-coder
burtenshaw
2025-05-05T11:03:06Z
11
2
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "dataset:burtenshaw/tulu-3-sft-personas-code-no-prompt", "base_model:Qwen/Qwen3-30B-A3B", "base_model:finetune:Qwen/Qwen3-30B-A3B", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-30T12:30:49Z
--- base_model: Qwen/Qwen3-30B-A3B datasets: burtenshaw/tulu-3-sft-personas-code-no-prompt library_name: transformers model_name: Qwen3-30B-A3B-python-coder tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen3-30B-A3B-python-coder This model is a fine-tuned version of [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) on the [burtenshaw/tulu-3-sft-personas-code-no-prompt](https://huggingface.co/datasets/burtenshaw/tulu-3-sft-personas-code-no-prompt) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="burtenshaw/Qwen3-30B-A3B-python-coder", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/smartwithfood/huggingface/runs/m22i87x1) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Moeen09/fish_speech_colab
Moeen09
2025-05-05T11:00:40Z
0
0
null
[ "dual_ar", "text-to-speech", "en", "zh", "ja", "ko", "ru", "fr", "de", "license:mit", "region:us" ]
text-to-speech
2025-05-05T10:59:37Z
--- license: mit language: - en - zh - ja - ko - ru - fr - de pipeline_tag: text-to-speech --- This is the Fish Speech version 1.5 model with the access-permission (gating) settings disabled. (Why does this exist? Because I don't want to enter an access token on Colab every time.)
shalomhsu/ppo-LunarLander-v2
shalomhsu
2025-05-05T11:00:19Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-05T03:28:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 278.21 +/- 18.80 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
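A hedged completion of the TODO placeholder above — a minimal sketch assuming the checkpoint was saved under the conventional filename `ppo-LunarLander-v2.zip` (the actual filename in the repo may differ) and that gymnasium with Box2D is installed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename is an assumption).
checkpoint = load_from_hub(
    repo_id="shalomhsu/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained agent.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```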
longdnk113/LSTM_CNN_SENTIMENT_PRETRAIN
longdnk113
2025-05-05T10:58:43Z
4
0
keras
[ "keras", "analysis", "sentiment", "text-classification", "en", "vi", "license:mit", "region:us" ]
text-classification
2025-05-04T13:17:25Z
--- license: mit language: - en - vi metrics: - accuracy - f1 - recall - precision pipeline_tag: text-classification tags: - analysis - sentiment - text-classification --- # Sentiment Analysis Using LSTM and CNN This project implements a hybrid deep learning model combining **Long Short-Term Memory (LSTM)** networks and **Convolutional Neural Networks (CNN)** for sentiment analysis. The architecture leverages the strengths of both LSTM and CNN to process textual data and classify sentiments effectively. --- ## Model Architecture ![image](Overall.png) The architecture consists of two parallel branches that process the input text sequences and merge their outputs for final classification: ### **Branch 1: CNN-Based Processing** 1. **Embedding Layer**: Converts input sequences into dense vector representations. 2. **Conv1D + Activation**: Extracts local features from the text using convolutional filters. 3. **MaxPooling1D**: Reduces the spatial dimensions while retaining the most important features. 4. **BatchNormalization**: Normalizes the activations to stabilize and accelerate training. 5. **Conv1D + MaxPooling1D + BatchNormalization**: Repeats the convolution and pooling process to extract deeper features. 6. **Flatten**: Converts the 2D feature maps into a 1D vector. ### **Branch 2: LSTM-Based Processing** 1. **Embedding Layer**: Similar to the CNN branch, converts input sequences into dense vector representations. 2. **Bidirectional LSTM**: Captures long-term dependencies in the text by processing it in both forward and backward directions. 3. **LayerNormalization**: Normalizes the outputs of the LSTM layer. 4. **Bidirectional GRU**: Further processes the sequence with Gated Recurrent Units for efficiency. 5. **LayerNormalization**: Normalizes the GRU outputs. 6. **Flatten**: Converts the sequence outputs into a 1D vector. ### **Merging and Classification** 1. **Concatenate**: Combines the outputs of the CNN and LSTM branches. 2. **Dense Layers with Dropout**: Fully connected layers with ReLU activation and dropout for regularization. 3. **Output Layer**: A dense layer with a softmax activation function to classify the sentiment into three categories: Positive, Neutral, and Negative. --- ## Why LSTM + CNN for Sentiment Analysis? ### **LSTM Strengths** - LSTMs are well-suited for capturing long-term dependencies in sequential data, such as text. - They excel at understanding the context and relationships between words in a sentence. ### **CNN Strengths** - CNNs are effective at extracting local patterns and features, such as n-grams, from text data. - They are computationally efficient and can process data in parallel. ### **Hybrid Approach** By combining LSTM and CNN, the model benefits from: - **Contextual Understanding**: LSTM captures the sequential nature of text. - **Feature Extraction**: CNN identifies important local patterns. - **Robustness**: The merged architecture ensures better generalization and performance on sentiment classification tasks. --- ## Applications This model can be used for: - Social media sentiment analysis (e.g., Twitter, Reddit). - Customer feedback classification. - Opinion mining in reviews and surveys. --- ## Training and Evaluation The model is trained on labeled datasets with text and sentiment labels. It uses: - **Sparse Categorical Crossentropy** as the loss function. - **AdamW Optimizer** for efficient training. - **Early Stopping** and **Model Checkpoints** to prevent overfitting and save the best model. 
Performance is evaluated with accuracy, a confusion matrix, and a classification report. --- ## Conclusion The hybrid LSTM + CNN architecture provides a powerful framework for sentiment analysis, combining the strengths of sequential modeling and feature extraction. This approach is versatile and can be adapted to various text classification tasks. ## License MIT License
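For readers who want to see the described two-branch topology in code, here is a minimal Keras sketch; all hyperparameters (vocabulary size, sequence length, filter and unit counts, dropout rate) are illustrative assumptions, not the trained model's actual values:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE, SEQ_LEN, EMB_DIM = 20_000, 128, 100  # assumed values

inp = layers.Input(shape=(SEQ_LEN,), dtype="int32")

# Branch 1: CNN feature extractor (local n-gram patterns).
x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)
x = layers.Conv1D(64, 5, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(64, 5, activation="relu")(x)
x = layers.MaxPooling1D(2)(x)
x = layers.BatchNormalization()(x)
cnn_out = layers.Flatten()(x)

# Branch 2: recurrent feature extractor (long-range context).
y = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)
y = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(y)
y = layers.LayerNormalization()(y)
y = layers.Bidirectional(layers.GRU(32, return_sequences=True))(y)
y = layers.LayerNormalization()(y)
rnn_out = layers.Flatten()(y)

# Merge both branches and classify: Positive / Neutral / Negative.
z = layers.Concatenate()([cnn_out, rnn_out])
z = layers.Dense(128, activation="relu")(z)
z = layers.Dropout(0.5)(z)
out = layers.Dense(3, activation="softmax")(z)

model = Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.AdamW(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```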
GitBag/a_star_final_ppo_math_3_critic
GitBag
2025-05-05T10:57:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
token-classification
2025-05-05T02:01:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlgawd/cyberrealistic_pony
mlgawd
2025-05-05T10:57:23Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-05T10:55:33Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nithya9404/llama-model-finetuned
Nithya9404
2025-05-05T10:54:35Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:TinyLlama/TinyLlama_v1.1", "base_model:adapter:TinyLlama/TinyLlama_v1.1", "license:apache-2.0", "region:us" ]
null
2025-05-05T10:49:11Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama_v1.1 tags: - generated_from_trainer model-index: - name: llama-model-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-model-finetuned This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
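Since this repository holds PEFT adapters rather than full weights, using it typically means attaching the adapters to the base model. A minimal sketch of standard PEFT usage (not taken from the card itself; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then apply the fine-tuned LoRA adapters on top.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama_v1.1")
model = PeftModel.from_pretrained(base, "Nithya9404/llama-model-finetuned")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama_v1.1")

inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```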
alikia2x/jina-embedding-v3-m2v-1024
alikia2x
2025-05-05T10:53:39Z
810
2
model2vec
[ "model2vec", "onnx", "safetensors", "embeddings", "static-embeddings", "sentence-transformers", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "base_model:jinaai/jina-embeddings-v3", "base_model:quantized:jinaai/jina-embeddings-v3", "license:mit", "region:us" ]
null
2025-02-07T13:02:09Z
--- base_model: jinaai/jina-embeddings-v3 language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh library_name: model2vec license: mit model_name: onnx tags: - embeddings - static-embeddings - sentence-transformers --- # alikia2x/jina-embedding-v3-m2v-1024 This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [jinaai/jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. ## Installation Install model2vec using pip: ``` pip install model2vec ``` ## Usage ### Via `model2vec` Load this model using the `from_pretrained` method: ```python from model2vec import StaticModel # Load a pretrained Model2Vec model model = StaticModel.from_pretrained("alikia2x/jina-embedding-v3-m2v-1024") # Compute text embeddings embeddings = model.encode(["Hello"]) ``` ### Via `sentence-transformers` ```bash pip install sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("alikia2x/jina-embedding-v3-m2v-1024") # embedding: # array([[ 1.1825741e-01, -1.2899181e-02, -1.0492010e-01, ..., # 1.1131058e-03, 8.2779792e-04, -7.6874542e-08]], # shape=(1, 1024), dtype=float32) embeddings = model.encode(["Hello"]) ``` ### Via ONNX ```bash pip install onnxruntime transformers ``` You need to download `onnx/model.onnx` in this repository first. ```python import onnxruntime from transformers import AutoTokenizer import numpy as np tokenizer_model = "alikia2x/jina-embedding-v3-m2v-1024" onnx_embedding_path = "path/to/your/model.onnx" texts = ["Hello"] tokenizer = AutoTokenizer.from_pretrained(tokenizer_model) session = onnxruntime.InferenceSession(onnx_embedding_path) inputs = tokenizer(texts, add_special_tokens=False, return_tensors="np") input_ids = inputs["input_ids"] lengths = [len(seq) for seq in input_ids[:-1]] offsets = [0] + np.cumsum(lengths).tolist() flattened_input_ids = input_ids.flatten().astype(np.int64) inputs = { "input_ids": flattened_input_ids, "offsets": np.array(offsets, dtype=np.int64), } outputs = session.run(None, inputs) embeddings = outputs[0] embeddings = embeddings.flatten() # [ 1.1825741e-01 -1.2899181e-02 -1.0492010e-01 ... 1.1131058e-03 # 8.2779792e-04 -7.6874542e-08] print(embeddings) ``` Note: A quantized (INT8) version of this model is also available, offering reduced memory usage with minimal performance impact. Simply replace `onnx/model.onnx` with the `onnx/model_INT8.onnx` file. Our testing shows less than a 1% drop in the F1 score on a real down-stream task. ## How it works Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec. 
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using Zipf weighting. During inference, we simply take the mean of all token embeddings occurring in a sentence. ## Additional Resources - [All Model2Vec models on the Hub](https://huggingface.co/models?library=model2vec) - [Model2Vec Repo](https://github.com/MinishLab/model2vec) - [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results) - [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials) ## Library Authors Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled). ## Citation Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work. ``` @software{minishlab2024model2vec, authors = {Stephan Tulkens, Thomas van Dongen}, title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model}, year = {2024}, url = {https://github.com/MinishLab/model2vec}, } ```
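The distillation step described above can be reproduced with model2vec's own API. A minimal sketch, assuming access to the gated jinaai/jina-embeddings-v3 weights and that 1024 PCA dimensions match this repository's configuration:

```python
from model2vec.distill import distill

# Distill a static-embedding model from the teacher sentence transformer:
# vocabulary -> teacher embeddings -> PCA -> Zipf weighting.
# Extra arguments (e.g. authentication / trust_remote_code) may be
# needed for this particular gated model.
m2v_model = distill(model_name="jinaai/jina-embeddings-v3", pca_dims=1024)
m2v_model.save_pretrained("jina-embedding-v3-m2v-1024")
```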
axs27/Pivot_Verse_Model
axs27
2025-05-05T10:52:43Z
0
0
null
[ "pytorch", "gpt2", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-05-05T10:46:58Z
--- license: apache-2.0 ---
axs27/aragpt_large_gpt4_short_prompts
axs27
2025-05-05T10:50:12Z
0
0
null
[ "pytorch", "gpt2", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-05-05T10:44:33Z
--- license: apache-2.0 ---
Youngdreamer/dummy-model
Youngdreamer
2025-05-05T10:49:22Z
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-05T10:48:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bihungba1101/json_segmenting_sft_warmup_deepseek
bihungba1101
2025-05-05T10:48:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T18:54:17Z
--- base_model: unsloth/DeepSeek-R1-Distill-Qwen-1.5B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** bihungba1101 - **License:** apache-2.0 - **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-1.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
axs27/aragpt_large_gpt4_prompts
axs27
2025-05-05T10:48:19Z
0
0
null
[ "pytorch", "gpt2", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-05-05T10:39:31Z
--- license: apache-2.0 ---
abnerppg/gemma-3
abnerppg
2025-05-05T10:47:58Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T09:46:29Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** abnerppg - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Curiousfox/helsinki_new_ver3
Curiousfox
2025-05-05T10:44:44Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "nan", "dataset:mozilla-foundation/common_voice_17_0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-05T10:44:29Z
--- library_name: transformers language: - nan license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-ZH tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - bleu model-index: - name: helsinki_new_ver3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # helsinki_new_ver3 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ZH](https://huggingface.co/Helsinki-NLP/opus-mt-en-ZH) on the mozilla-foundation/common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7044 - Bleu: 2.2015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 23000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.9675 | 0.32 | 1000 | 1.0573 | 0.7690 | | 0.9217 | 0.64 | 2000 | 0.9924 | 1.3531 | | 0.8782 | 0.96 | 3000 | 0.9463 | 1.8407 | | 0.8377 | 1.28 | 4000 | 0.9078 | 3.0190 | | 0.8304 | 1.6 | 5000 | 0.8765 | 2.1759 | | 0.8114 | 1.92 | 6000 | 0.8479 | 3.2072 | | 0.7735 | 2.24 | 7000 | 0.8247 | 4.0669 | | 0.7667 | 2.56 | 8000 | 0.8051 | 5.6676 | | 0.7547 | 2.88 | 9000 | 0.7882 | 4.2755 | | 0.7151 | 3.2 | 10000 | 0.7712 | 5.7800 | | 0.7103 | 3.52 | 11000 | 0.7591 | 6.0659 | | 0.7095 | 3.84 | 12000 | 0.7458 | 7.0038 | | 0.7044 | 4.16 | 13000 | 0.7351 | 1.7120 | | 0.6717 | 4.48 | 14000 | 0.7250 | 8.0104 | | 0.6856 | 4.8 | 15000 | 0.7169 | 2.1741 | | 0.6755 | 5.12 | 16000 | 0.7097 | 2.1614 | | 0.6635 | 5.44 | 17000 | 0.7044 | 2.2015 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
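A minimal sketch of running inference with this MarianMT fine-tune via the transformers pipeline; the input sentence and generation settings are illustrative assumptions:

```python
from transformers import pipeline

# MarianMT checkpoints work with the standard translation pipeline.
translator = pipeline("translation", model="Curiousfox/helsinki_new_ver3")
print(translator("How are you today?", max_length=64)[0]["translation_text"])
```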
Taimoor4477/Llama3_18b4bitfinetunedFTRunApproachARun03142405052025
Taimoor4477
2025-05-05T10:42:48Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:42:42Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Taimoor4477 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
azharefiolic6/dvsvs
azharefiolic6
2025-05-05T10:41:59Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-05T10:41:59Z
--- license: bigscience-openrail-m ---
moyixiao/unsloth_llama3_1b_bf16
moyixiao
2025-05-05T10:38:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:36:57Z
--- base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** moyixiao - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
carozum/Phi-4-mini-instruct-raft
carozum
2025-05-05T10:37:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:37:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xiaoyuanliu/Qwen2.5-7B-simplerl-ppo-baseline-3.0k
xiaoyuanliu
2025-05-05T10:34:39Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:27:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
geoplus/task-8-microsoft-Phi-3.5-mini-instruct
geoplus
2025-05-05T10:34:02Z
789
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
2025-04-12T17:37:03Z
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
CharlesLi/graph_sft_92
CharlesLi
2025-05-05T10:29:46Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:23:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
siddhant71197/female_fullcurvy_short_cap
siddhant71197
2025-05-05T10:26:08Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-05T09:55:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sidf --- # Female_Fullcurvy_Short_Cap <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sidf` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sidf", "lora_weights": "https://huggingface.co/siddhant71197/female_fullcurvy_short_cap/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('siddhant71197/female_fullcurvy_short_cap', weight_name='lora.safetensors') image = pipeline('Sidf').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/siddhant71197/female_fullcurvy_short_cap/discussions) to add images that show off what you’ve made with this LoRA.
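A small follow-up to the diffusers snippet above (an editor's addition, not part of the original card): the pipeline call leaves the generated image in memory, so persisting it takes one more line; the filename here is arbitrary.

```py
image.save("sidf_output.png")  # write the generated image to disk
```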
Pongsaky/llama3.2-typhoon2-1b-instruct-tagged_non-nmt
Pongsaky
2025-05-05T10:20:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:scb10x/llama3.2-typhoon2-1b-instruct", "base_model:finetune:scb10x/llama3.2-typhoon2-1b-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:20:01Z
--- base_model: scb10x/llama3.2-typhoon2-1b-instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Pongsaky - **License:** apache-2.0 - **Finetuned from model:** scb10x/llama3.2-typhoon2-1b-instruct This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
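The card above documents training only; the minimal inference sketch below is an editor's assumption, not an official quick start. It presumes the repo contains merged weights with the chat template inherited from scb10x/llama3.2-typhoon2-1b-instruct.

```python
# Hedged sketch: assumes full merged weights and an inherited chat template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Pongsaky/llama3.2-typhoon2-1b-instruct-tagged_non-nmt",
    device_map="auto",
)
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```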
lanqi777/q-FrozenLake-v1-4x4-noSlippery
lanqi777
2025-05-05T10:17:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-05T09:04:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lanqi777/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
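A possible next step after loading (a hedged sketch, not part of the original card): run one greedy episode with the Q-table. The `qtable` key and the gymnasium import follow the Deep RL Course conventions and are assumptions here.

```python
import gymnasium as gym
import numpy as np

# Assumes the pickled dict stores the Q-table under model["qtable"],
# as in the Hugging Face Deep RL Course layout.
env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```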
jssky/fdf20b80-a96d-4dfe-bf12-2c62235c1f16
jssky
2025-05-05T10:14:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "conversational", "base_model:NousResearch/Llama-2-7b-hf", "base_model:finetune:NousResearch/Llama-2-7b-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T09:59:58Z
--- library_name: transformers base_model: NousResearch/Llama-2-7b-hf tags: - axolotl - generated_from_trainer model-index: - name: fdf20b80-a96d-4dfe-bf12-2c62235c1f16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.9.0` ```yaml base_model: NousResearch/Llama-2-7b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 012ab4813cc99fb8_train_data.json ds_type: json format: custom path: /workspace/input_data/012ab4813cc99fb8_train_data.json type: field_input: evidence field_instruction: question field_output: SQL format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: jssky/fdf20b80-a96d-4dfe-bf12-2c62235c1f16 hub_repo: null hub_strategy: checkpoint hub_token: null huggingface_repo_visibility: public learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lr_scheduler: cosine max_steps: 1000 micro_batch_size: 4 mlflow_experiment_name: /tmp/012ab4813cc99fb8_train_data.json num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 512 strict: false tf32: false train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: b1e23278-252e-44d7-9491-1b28d344421c wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b1e23278-252e-44d7-9491-1b28d344421c warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # fdf20b80-a96d-4dfe-bf12-2c62235c1f16 This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3351 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 297 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0101 | 1 | 1.0027 | | 0.9459 | 0.2525 | 25 | 0.8182 | | 0.6763 | 0.5051 | 50 | 0.6266 | | 0.6145 | 0.7576 | 75 | 0.6172 | | 0.4964 | 1.0101 | 100 | 0.5298 | | 0.3816 | 1.2626 | 125 | 0.4998 | | 0.3052 | 1.5152 | 150 | 0.4355 | | 0.2759 | 1.7677 | 175 | 0.3985 | | 0.2034 | 2.0202 | 200 | 0.3502 | | 0.1233 | 2.2727 | 225 | 0.3448 | | 0.0902 | 2.5253 | 250 | 0.3376 | | 0.0959 | 2.7778 | 275 | 0.3351 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
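The card stops at training details; the sketch below is an editor's assumption about inference, mirroring the `'{instruction} {input}'` prompt format from the axolotl config above (question as instruction, evidence as input). The question and schema strings are illustrative placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jssky/fdf20b80-a96d-4dfe-bf12-2c62235c1f16"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Prompt layout mirrors the training format '{instruction} {input}'.
prompt = "List the names of schools with more than 30000 students.\nschools(name, enrollment)"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```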
SatyamTank/sft-tiny-chatbot
SatyamTank
2025-05-05T10:11:26Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:10:09Z
--- base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 library_name: transformers model_name: sft-tiny-chatbot tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for sft-tiny-chatbot This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SatyamTank/sft-tiny-chatbot", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bi42/BI42-TALLY-V4-FT
bi42
2025-05-05T10:11:13Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T09:50:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VerlTool/Qwen2.5-Coder-1B-TIR-SFT-new-Interpreter-Thinking
VerlTool
2025-05-05T10:11:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:07:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aiaha/Qwen3-vLLM
aiaha
2025-05-05T10:09:59Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:09:58Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** aiaha - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
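No usage example is included above; since the repo name suggests vLLM serving, a hedged sketch follows. It assumes the repo holds merged weights that vLLM can load directly (not just a 4-bit adapter); if only an adapter was uploaded, this will not work as written.

```python
from vllm import LLM, SamplingParams

# Hypothetical serving sketch; model id taken from this card's repo name.
llm = LLM(model="aiaha/Qwen3-vLLM")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain what Unsloth does in one sentence."], params)
print(outputs[0].outputs[0].text)
```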
Flo0620/Qwen2_5_7B_r32_a32_d0_2
Flo0620
2025-05-05T10:09:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:42:51Z
--- base_model: Qwen/Qwen2.5-VL-7B-Instruct library_name: transformers model_name: Qwen2_5_7B_r32_a32_d0_2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen2_5_7B_r32_a32_d0_2 This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r32_a32_d0_2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hf-100/Mistral-Large-Instruct-2411-Spellbound-StoryWriter-123B-instruct-0.1-instruct-chkpt-80-16-bit
hf-100
2025-05-05T10:08:02Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-05T10:04:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VerlTool/Qwen2.5-Coder-7B-TIR-SFT-new-Interpreter-Thinking
VerlTool
2025-05-05T10:06:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T09:59:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd_triggered
mveroe
2025-05-05T10:05:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd", "base_model:finetune:mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-05T10:00:42Z
--- library_name: transformers license: llama3.2 base_model: mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd_triggered results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd_triggered This model is a fine-tuned version of [mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd](https://huggingface.co/mveroe/Llama-3.2-1B-Instruct-safecoder-3.0-SecInsec-safecoder_reg_only_sec_bd) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E17
fffanx
2025-05-05T10:03:53Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:03:24Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent1_E17 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent1_E17 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent1_E17", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Burdine/push_to_huggingface
Burdine
2025-05-05T10:03:49Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-05T10:03:49Z
--- license: apache-2.0 ---
fffanx/Llama-3.2-1B-Instruct-GRPO-agent0_E17
fffanx
2025-05-05T10:03:21Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-05T10:02:45Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent0_E17 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent0_E17 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent0_E17", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
KaveriKavita/llama-3-8b-testcaseGenerator
KaveriKavita
2025-05-05T10:02:51Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T14:54:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mihael199/slavic-ner-model
mihael199
2025-05-05T10:01:59Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-05T10:00:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhorikpaitan/dfvasdfv
zhorikpaitan
2025-05-05T10:01:48Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-05T10:01:48Z
--- license: creativeml-openrail-m ---
Prince-1/orpheus_3b_0.1_ft_16bit
Prince-1
2025-05-05T10:01:25Z
9
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "tts", "text-to-speech", "en", "dataset:MrDragonFox/Elise", "base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit", "base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-to-speech
2025-05-01T10:17:53Z
--- base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - tts - text-to-speech license: apache-2.0 library_name: transformers language: - en datasets: - MrDragonFox/Elise --- # Uploaded model - **Finetuned by:** Prince-1 - **License:** apache-2.0 - **Finetuned from model:** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) Orpheus TTS is a state-of-the-art, Llama-based Speech-LLM designed for high-quality, empathetic text-to-speech generation. This model has been finetuned to deliver human-level speech synthesis, achieving exceptional clarity, expressiveness, and real-time streaming performance. # Model Details ### Model Capabilities - **Human-Like Speech**: Natural intonation, emotion, and rhythm that is superior to SOTA closed-source models - **Zero-Shot Voice Cloning**: Clone voices without prior fine-tuning - **Guided Emotion and Intonation**: Control speech and emotion characteristics with simple tags - **Low Latency**: ~200ms streaming latency for real-time applications, reducible to ~100ms with input streaming ### Model Sources - **GitHub Repo:** [https://github.com/canopyai/Orpheus-TTS](https://github.com/canopyai/Orpheus-TTS) - **Blog Post:** [https://canopylabs.ai/model-releases](https://canopylabs.ai/model-releases) - **Colab Inference Notebook:** [notebook link](https://colab.research.google.com/drive/1KhXT56UePPUHhqitJNUxq63k-pQomz3N?usp=sharing) # Usage Check out our Colab ([link to Colab](https://colab.research.google.com/drive/1KhXT56UePPUHhqitJNUxq63k-pQomz3N?usp=sharing)) or GitHub ([link to GitHub](https://github.com/canopyai/Orpheus-TTS)) for how to run easy inference on our finetuned models. # Model Misuse Do not use our models for impersonation without consent, misinformation or deception (including fake news or fraudulent calls), or any illegal or harmful activity. By using this model, you agree to follow all applicable laws and ethical guidelines. We disclaim responsibility for any use.
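As a rough illustration of the tag-guided generation mentioned above: the voice name and emotion tag below are assumptions drawn from the upstream Orpheus project, and turning the generated audio tokens into a waveform requires the decoding pipeline from the GitHub repo, which is omitted here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustration only: the "voice: text" prompt format with inline emotion
# tags is an assumption from the upstream Orpheus project; decoding the
# generated audio tokens to a waveform is handled by the pipeline in the
# GitHub repo above and is not shown here.
tokenizer = AutoTokenizer.from_pretrained("Prince-1/orpheus_3b_0.1_ft_16bit")
model = AutoModelForCausalLM.from_pretrained(
    "Prince-1/orpheus_3b_0.1_ft_16bit", torch_dtype=torch.bfloat16, device_map="auto"
)
prompt = "tara: I can't believe it <laugh> it actually worked!"  # hypothetical voice/tag
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
audio_tokens = model.generate(**inputs, max_new_tokens=1024)
```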
mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF
mradermacher
2025-05-05T10:00:17Z
4
0
transformers
[ "transformers", "gguf", "32 k context", "reasoning", "thinking", "qwen3", "12 experts", "en", "base_model:DavidAU/Qwen3-30B-A4.5B-12-Cooks", "base_model:quantized:DavidAU/Qwen3-30B-A4.5B-12-Cooks", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-05T04:17:13Z
--- base_model: DavidAU/Qwen3-30B-A4.5B-12-Cooks language: - en library_name: transformers quantized_by: mradermacher tags: - 32 k context - reasoning - thinking - qwen3 - 12 experts --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/DavidAU/Qwen3-30B-A4.5B-12-Cooks <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q2_K.gguf) | Q2_K | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.IQ3_XS.gguf) | IQ3_XS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q3_K_S.gguf) | Q3_K_S | 13.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.IQ3_S.gguf) | IQ3_S | 13.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.IQ3_M.gguf) | IQ3_M | 13.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q3_K_L.gguf) | Q3_K_L | 16.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.IQ4_XS.gguf) | IQ4_XS | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q5_K_S.gguf) | Q5_K_S | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q5_K_M.gguf) | Q5_K_M | 21.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q6_K.gguf) | Q6_K | 25.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A4.5B-12-Cooks-GGUF/resolve/main/Qwen3-30B-A4.5B-12-Cooks.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
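To make the Usage pointer above concrete: a single-file quant such as the recommended Q4_K_M can be loaded with `llama-cpp-python`. A minimal sketch, assuming the file from the table above has been downloaded locally (the generation settings are illustrative, not recommendations):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Minimal sketch: load the single-file Q4_K_M quant from the table above.
llm = Llama(
    model_path="Qwen3-30B-A4.5B-12-Cooks.Q4_K_M.gguf",
    n_ctx=32768,      # the card's tags advertise 32k context
    n_gpu_layers=-1,  # offload all layers to GPU when available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```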