modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
DAMO-NLP-SG/siglip2-so400m-patch14-384-navit
DAMO-NLP-SG
2025-03-20T04:12:04Z
9,444
0
transformers
[ "transformers", "safetensors", "videollama3_vision_encoder", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-02-28T07:28:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pshastri/example-model
pshastri
2025-03-20T04:11:40Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-03-20T04:11:09Z
--- license: mit --- This is a test model for understanding how Hugging Face models work.
DAMO-NLP-SG/VL3-SigLIP-NaViT
DAMO-NLP-SG
2025-03-20T04:11:24Z
8,331
7
transformers
[ "transformers", "safetensors", "videollama3_vision_encoder", "feature-extraction", "visual-encoder", "multi-modal-large-language-model", "image-feature-extraction", "custom_code", "en", "arxiv:2501.13106", "arxiv:2406.07476", "arxiv:2306.02858", "base_model:google/siglip-so400m-patch14-384", "base_model:finetune:google/siglip-so400m-patch14-384", "license:apache-2.0", "region:us" ]
image-feature-extraction
2025-01-21T08:52:21Z
--- library_name: transformers tags: - visual-encoder - multi-modal-large-language-model license: apache-2.0 language: - en base_model: - google/siglip-so400m-patch14-384 pipeline_tag: image-feature-extraction --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/626938b16f8f86ad21deb989/543Eaf__U-a9Z72LPGWgC.png" width="150" style="margin-bottom: 0.2;"/> </p> <h3 align="center">The visual encoder of <a href="https://arxiv.org/abs/2501.13106">VideoLLaMA 3: Frontier Multimodal Foundation Models for Video Understanding</a></h3> <h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA3">GitHub</a> for the latest updates. </h5> ## 🌟 Introduction This model serves as the visual encoder in VideoLLaMA3. VideoLLaMA3 leverages the Any-resolution Vision Tokenization (AVT) approach to dynamically process images and videos of varying resolutions. This is accomplished by adapting the pre-trained vision encoder (based on the ViT architecture) to use 2D-RoPE (Rotary Position Embeddings), replacing the absolute position embeddings traditionally used in ViT. With AVT, VideoLLaMA3 is able to represent images and videos with greater detail across different resolutions, enriching the vision tokens with more information. To ensure seamless integration with AVT, we fine-tune both the vision encoder and the projector during the Vision Encoder Adaptation stage (Stage #1 in the VideoLLaMA3 training pipeline) using scene images, document data, and scene images with text. Before training, the model parameters and architecture are initialized from [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384). ## 🚀 Model Performance | Base Model | GQA | AI2D | ChartQA | DocVQA<sub>val</sub> | MME | |---------------------------------|------------|------------|-------------|--------------------------|------------| | clip-vit-large-patch14-336 | 61.50 | 56.28 | 18.32 | 24.86 | **1668.41**| | dfn5B-clip-vit-h-14-378 | 62.70 | 56.87 | 16.40 | 23.09 | 1665.35 | | siglip-so400m-patch14-384 **(Our Implementation)** | **62.92** | **57.12** | **22.44** | **31.32** | 1667.92 | * A more detailed analysis can be found in our [paper](https://arxiv.org/abs/2501.13106).
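To make the 2D-RoPE idea above concrete, here is a minimal, self-contained sketch of rotary position embeddings applied over a 2D patch grid. It is illustrative only: `rope_2d` and its channel layout are assumptions for exposition, not code from the VideoLLaMA3 repository.

```python
import torch

def rope_2d(patch_embeds: torch.Tensor, grid_h: int, grid_w: int, dim: int) -> torch.Tensor:
    """Rotate patch features by their (row, col) position on a grid_h x grid_w grid.

    Half of each token's channel pairs rotate with the row index and half with the
    column index, so any grid size works without interpolating a learned position table.
    """
    half = dim // 2
    # One frequency per rotated channel pair (half/2 pairs per axis).
    freqs = 1.0 / (10000 ** (torch.arange(0, half, 2).float() / half))
    ang_rows = torch.outer(torch.arange(grid_h).float(), freqs)  # (grid_h, half/2)
    ang_cols = torch.outer(torch.arange(grid_w).float(), freqs)  # (grid_w, half/2)
    # Broadcast each axis' angles over the full token grid, then flatten row-major.
    ang = torch.cat(
        [
            ang_rows[:, None, :].expand(grid_h, grid_w, -1),
            ang_cols[None, :, :].expand(grid_h, grid_w, -1),
        ],
        dim=-1,
    ).reshape(grid_h * grid_w, -1)  # (tokens, dim/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = patch_embeds[..., 0::2], patch_embeds[..., 1::2]
    out = torch.empty_like(patch_embeds)
    out[..., 0::2] = x1 * cos - x2 * sin  # standard pairwise rotary rotation
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# The same function handles any resolution's patch grid, e.g. 12x16 here:
tokens = rope_2d(torch.randn(12 * 16, 64), grid_h=12, grid_w=16, dim=64)
```

Because the rotation depends only on each patch's grid coordinates, variable-resolution inputs need no resizing of position parameters, which is the property AVT relies on.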
## 🤖 Quick Start ```python import torch from transformers import AutoModel, AutoImageProcessor from transformers.image_utils import load_image model_name = "DAMO-NLP-SG/VL3-SigLIP-NaViT" image_path = "https://github.com/DAMO-NLP-SG/VideoLLaMA3/blob/main/assets/sora.png?raw=true" images = load_image(image_path) model = AutoModel.from_pretrained( model_name, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) processor = AutoImageProcessor.from_pretrained(model_name, trust_remote_code=True) inputs = processor(images=images, merge_size=1) inputs = {k: torch.tensor(v).cuda() for k, v in inputs.items()} if "pixel_values" in inputs: inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16) image_features = model(**inputs) ``` ## Citation If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX: ```bibtex @article{damonlpsg2025videollama3, title={VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding}, author={Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao}, journal={arXiv preprint arXiv:2501.13106}, year={2025}, url = {https://arxiv.org/abs/2501.13106} } @article{damonlpsg2024videollama2, title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs}, author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong}, journal={arXiv preprint arXiv:2406.07476}, year={2024}, url = {https://arxiv.org/abs/2406.07476} } @article{damonlpsg2023videollama, title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding}, author = {Zhang, Hang and Li, Xin and Bing, Lidong}, journal = {arXiv preprint arXiv:2306.02858}, year = {2023}, url = {https://arxiv.org/abs/2306.02858} } ```
DAMO-NLP-SG/VideoLLaMA3-7B-Image
DAMO-NLP-SG
2025-03-20T04:08:44Z
5,489
10
transformers
[ "transformers", "safetensors", "videollama3_qwen2", "text-generation", "multi-modal", "large-language-model", "video-language-model", "visual-question-answering", "custom_code", "en", "dataset:lmms-lab/LLaVA-OneVision-Data", "dataset:allenai/pixmo-docs", "dataset:HuggingFaceM4/Docmatix", "dataset:lmms-lab/LLaVA-Video-178K", "dataset:ShareGPT4Video/ShareGPT4Video", "arxiv:2501.13106", "arxiv:2406.07476", "arxiv:2306.02858", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "region:us" ]
visual-question-answering
2025-01-21T08:36:12Z
--- library_name: transformers tags: - multi-modal - large-language-model - video-language-model license: apache-2.0 datasets: - lmms-lab/LLaVA-OneVision-Data - allenai/pixmo-docs - HuggingFaceM4/Docmatix - lmms-lab/LLaVA-Video-178K - ShareGPT4Video/ShareGPT4Video language: - en metrics: - accuracy pipeline_tag: visual-question-answering base_model: - Qwen/Qwen2.5-7B-Instruct --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/626938b16f8f86ad21deb989/tt5KYnAUmQlHtfB1-Zisl.png" width="150" style="margin-bottom: 0.2;"/> </p> <h3 align="center"><a href="https://arxiv.org/abs/2501.13106">VideoLLaMA 3: Frontier Multimodal Foundation Models for Video Understanding</a></h3> <h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA3">GitHub</a> for the latest updates. </h5> ## 📰 News <!-- * **[2025.01.23]** 👋👋 Update technical report. If you have works closely related to VideoLLaMA3 but not mentioned in the paper, feel free to let us know. --> * **[2025.01.24]** 🔥🔥 Online Demo is available: [VideoLLaMA3-Image-7B](https://huggingface.co/spaces/lixin4ever/VideoLLaMA3-Image), [VideoLLaMA3-7B](https://huggingface.co/spaces/lixin4ever/VideoLLaMA3). * **[2025.01.22]** Release models and inference code of VideoLLaMA 3. ## 🌟 Introduction VideoLLaMA 3 represents a state-of-the-art series of multimodal foundation models designed to excel in both image and video understanding tasks. Leveraging advanced architectures, VideoLLaMA 3 demonstrates exceptional capabilities in processing and interpreting visual content across various contexts. These models are specifically designed to address complex multimodal challenges, such as integrating textual and visual information, extracting insights from sequential video data, and performing high-level reasoning over both dynamic and static visual scenes. ## 🌎 Model Zoo | Model | Base Model | HF Link | | -------------------- | ------------ | ------------------------------------------------------------ | | VideoLLaMA3-7B | Qwen2.5-7B | [DAMO-NLP-SG/VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA3-7B) | | VideoLLaMA3-2B | Qwen2.5-1.5B | [DAMO-NLP-SG/VideoLLaMA3-2B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA3-2B) | | VideoLLaMA3-7B-Image (**This Checkpoint**) | Qwen2.5-7B | [DAMO-NLP-SG/VideoLLaMA3-7B-Image](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA3-7B-Image) | | VideoLLaMA3-2B-Image | Qwen2.5-1.5B | [DAMO-NLP-SG/VideoLLaMA3-2B-Image](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA3-2B-Image) | We also release the tuned vision encoder of VideoLLaMA3-7B for wider application: | Model | Base Model | HF Link | | ----------------------------- | ------------------------- | ------------------------------------------------------------ | | VideoLLaMA3-7B Vision Encoder | siglip-so400m-patch14-384 | [DAMO-NLP-SG/VL3-SigLIP-NaViT](https://huggingface.co/DAMO-NLP-SG/VL3-SigLIP-NaViT) | ## 🚀 Main Results <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/626938b16f8f86ad21deb989/ArHgZAmidn8Qlz8BwOdJI.png"> * \* denotes the reproduced results.
## 🤖 Quick Start ```python import torch from transformers import AutoModelForCausalLM, AutoProcessor, AutoModel, AutoImageProcessor model_name = "DAMO-NLP-SG/VideoLLaMA3-7B-Image" model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True) # Image conversation conversation = [ { "role": "user", "content": [ {"type": "image", "image": {"image_path": "https://github.com/DAMO-NLP-SG/VideoLLaMA3/blob/main/assets/sora.png?raw=true"}}, {"type": "text", "text": "What is the woman wearing?"}, ] } ] inputs = processor(conversation=conversation, return_tensors="pt") inputs = {k: v.cuda() if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} if "pixel_values" in inputs: inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16) output_ids = model.generate(**inputs, max_new_tokens=128) response = processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip() print(response) ``` ## Citation If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX: ```bibtex @article{damonlpsg2025videollama3, title={VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding}, author={Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao}, journal={arXiv preprint arXiv:2501.13106}, year={2025}, url = {https://arxiv.org/abs/2501.13106} } @article{damonlpsg2024videollama2, title={VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs}, author={Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong}, journal={arXiv preprint arXiv:2406.07476}, year={2024}, url = {https://arxiv.org/abs/2406.07476} } @article{damonlpsg2023videollama, title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding}, author = {Zhang, Hang and Li, Xin and Bing, Lidong}, journal = {arXiv preprint arXiv:2306.02858}, year = {2023}, url = {https://arxiv.org/abs/2306.02858} } ```
quancute/QwQ-32B-Q4_K_M-GGUF
quancute
2025-03-20T04:08:10Z
0
0
transformers
[ "transformers", "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/QwQ-32B", "base_model:quantized:Qwen/QwQ-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-03-20T04:06:39Z
--- base_model: Qwen/QwQ-32B language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # quancute/QwQ-32B-Q4_K_M-GGUF This model was converted to GGUF format from [`Qwen/QwQ-32B`](https://huggingface.co/Qwen/QwQ-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/QwQ-32B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo quancute/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo quancute/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo quancute/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo quancute/QwQ-32B-Q4_K_M-GGUF --hf-file qwq-32b-q4_k_m.gguf -c 2048 ```
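For Python users, the same GGUF file can be loaded with llama-cpp-python instead of the CLI; a minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed (this snippet is not part of the original card):

```python
from llama_cpp import Llama

# Fetches the GGUF from the Hub on first use and caches it locally.
llm = Llama.from_pretrained(
    repo_id="quancute/QwQ-32B-Q4_K_M-GGUF",
    filename="qwq-32b-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```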
sohnikaavisakula/inventory-optimization
sohnikaavisakula
2025-03-20T04:04:03Z
0
0
null
[ "region:us" ]
null
2025-03-20T03:17:28Z
# 📦 Random Forest Model for Inventory Optimization This is a trained **Random Forest Regressor** model for predicting **stockout risks** and **optimizing inventory levels** based on supplier lead time and demand fluctuations. ## Model Overview - **Algorithm Used**: Random Forest Regressor - **Purpose**: Forecasting inventory demand & optimizing reorder points - **Key Features**: - Supplier lead times - Order quantities - Shipment modes - Regional logistics data - Demand fluctuations ## 📊 Training Details - **Dataset**: Historical e-commerce inventory data (orders, shipments, supplier info) - **Feature Engineering**: Handled missing values, removed outliers, and normalized data - **Performance Metrics**: - **Mean Absolute Error (MAE):** *XYZ* - **Root Mean Squared Error (RMSE):** *XYZ* - **R² Score:** *XYZ* ## 🔧 How to Use the Model To load and use the model in Python: ```python import joblib from huggingface_hub import hf_hub_download # Download the model model_path = hf_hub_download(repo_id="sohnikaavisakula/inventory-optimization", filename="inventory_model.pkl") # Load the model model = joblib.load(model_path) # Example input (adjust based on your dataset) X_test = [[5.2, 1.3, 7.8, 3.1]] # Replace with real data prediction = model.predict(X_test) print("Predicted stockout risk:", prediction) ```
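Scikit-learn estimators fitted on a DataFrame warn when `predict()` later receives an unnamed array, so it can help to pass inputs with the training column names. Continuing from the loading snippet above, a sketch; the column names here are purely hypothetical placeholders for the model's real training features:

```python
import pandas as pd

# Hypothetical column names; replace with the features the model was trained on.
X_test = pd.DataFrame(
    [[5.2, 1.3, 7.8, 3.1]],
    columns=["supplier_lead_time", "order_quantity", "shipment_mode_enc", "demand_change"],
)
print("Predicted stockout risk:", model.predict(X_test))
```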
codecandy/antiblur
codecandy
2025-03-20T04:03:33Z
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "image-generation", "flux", "safetensors", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-20T04:03:33Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - image-generation - flux - safetensors widget: - text: >- a young college student, walking on the street, campus background, photography output: url: images/2f82e6b1e5969d70a9044c19975bcdcca06b0f251d14f9c2c6095fa6.jpg - text: a young woman, New York City output: url: images/340c1ae6709f56f3d8176848653dcade93d2b5b8ade662da167ef818.jpg - text: >- happy stunning girl with long dark hair, wearing blue clothes, playing guitar, a beautiful field of flowers, colorful flowers everywhere, hills in the background output: url: images/ec9a40eed46e8d17d3db1560a6543c6e6be9ebe1e41ecd5d137c01e0.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # FLUX.1-dev-LoRA-AntiBlur This is a functional LoRA trained on FLUX.1-dev for deep DoF (Anti-Blur🔥) by [Vadim_Fedenko](https://www.shakker.ai/userpage/1f90018d803d4045b8dec4d627915098/publish) on [Shakker AI](https://www.shakker.ai/modelinfo/5c3fa3f1d5034e63be325196eae0b4f6?from=search). It may not be fancy, but it works. <div class="container"> <img src="./poster.jpg" width="1024"/> </div> <!-- ## Showcases <Gallery /> --> ## Comparison The following example shows a simple comparison with FLUX.1-dev under the same parameter settings. <div class="container"> <img src="./compare1.png" width="1024"/> </div> It is worth noting that this LoRA enhances the depth of field with very little damage to image quality, and can be used together with other components, such as ControlNet. We regard it as a basic functional LoRA. <div class="container"> <img src="./compare2.png" width="1024"/> </div> ## Trigger words No trigger word is required. The recommended scale is `1.0` to `1.5` in diffusers. ## Inference ```python import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16) pipe.load_lora_weights("Shakker-Labs/FLUX.1-dev-LoRA-AntiBlur", weight_name="FLUX-dev-lora-AntiBlur.safetensors") pipe.fuse_lora(lora_scale=1.5) pipe.to("cuda") prompt = "a young college student, walking on the street, campus background, photography" image = pipe(prompt, num_inference_steps=24, guidance_scale=3.5, width=768, height=1024, ).images[0] image.save("example.png") ``` ## Online Inference You can also run this model at [Shakker AI](https://www.shakker.ai/modelinfo/5c3fa3f1d5034e63be325196eae0b4f6?from=search), where we provide an online interface to generate images. ## Acknowledgements This model was trained by [Vadim_Fedenko](https://www.shakker.ai/userpage/1f90018d803d4045b8dec4d627915098/publish), who retains its copyright; we release it with permission. The model follows the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
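As an alternative to `fuse_lora()`, diffusers can keep the LoRA attached and take its strength per call; a sketch, assuming the pipeline from the inference snippet above is still loaded and that, as in current diffusers Flux pipelines, the LoRA scale is read from `joint_attention_kwargs`:

```python
# Continuing from the snippet above: undo the earlier fuse so the scale
# can be varied per call instead of being baked into the weights.
pipe.unfuse_lora()
image = pipe(
    prompt,
    num_inference_steps=24,
    guidance_scale=3.5,
    width=768,
    height=1024,
    joint_attention_kwargs={"scale": 1.2},  # within the recommended 1.0-1.5 range
).images[0]
image.save("example_scale_1_2.png")
```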
toilaluan/latent-lm-vae-z6-decoder
toilaluan
2025-03-20T04:03:26Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T02:40:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
toilaluan/latent-lm-vae-z6-encoder
toilaluan
2025-03-20T04:03:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T02:39:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF
mradermacher
2025-03-20T04:00:18Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:dutti/UnslopNemo-Mag-Mell_T-1", "base_model:quantized:dutti/UnslopNemo-Mag-Mell_T-1", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-03-20T00:47:54Z
--- base_model: dutti/UnslopNemo-Mag-Mell_T-1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/dutti/UnslopNemo-Mag-Mell_T-1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | 
[GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
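To fetch one of the quants above programmatically rather than through the links, `huggingface_hub` works as usual; a sketch using the Q4_K_M file named in the table:

```python
from huggingface_hub import hf_hub_download

# Downloads the chosen quant into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF",
    filename="UnslopNemo-Mag-Mell_T-1.i1-Q4_K_M.gguf",
)
print(path)
```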
locuslab/base-smollm2-1.7b-score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B
locuslab
2025-03-20T03:59:19Z
0
0
null
[ "pytorch", "llama", "model", "transformer", "smollm2", "license:mit", "region:us" ]
null
2025-03-20T03:28:59Z
--- version: main family: smollm2-1.7b model_name: score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B license: mit tags: - model - transformer - smollm2 --- # SmolLM2 score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B (Version: main) ## Model Details - **Architecture:** SmolLM2 - **Parameters:** 1.7B ## Training Configuration ```yaml optimizer: class_path: torch.optim.AdamW init_args: lr: 0.0005 weight_decay: 0.01 precision: bf16-mixed seed: 42 train: global_batch_size: 1024 max_seq_length: 2048 max_tokens: 600000000000 micro_batch_size: 8 ``` ## Model Loading and Revision System This repository hosts multiple revisions of the model. To load a specific revision, use the `revision` parameter. For example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B", revision="final") tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B", revision="final") ``` Replace `"final"` with the desired revision.
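To see which revisions exist before picking one, the Hub API can list a repo's branches; a sketch (not from the card) using the same repo id as the loading example above:

```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs(
    "locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B"
)
# Each branch name is a loadable revision for from_pretrained(..., revision=...).
print([branch.name for branch in refs.branches])
```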
pasukka/detail-classifier-new-app-v.10
pasukka
2025-03-20T03:58:19Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T03:30:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SHEN0829/whisper-turbo_fine_tune1
SHEN0829
2025-03-20T03:53:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-20T02:29:32Z
--- library_name: transformers language: - zh license: mit base_model: openai/whisper-large-v3-turbo tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: whisper-turbo_fine_tune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-turbo_fine_tune This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2246 - Cer: 12.4782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.1535 | 1.4184 | 1000 | 0.2609 | 13.4480 | | 0.0729 | 2.8369 | 2000 | 0.2373 | 12.2139 | | 0.0202 | 4.2553 | 3000 | 0.2397 | 13.2842 | | 0.0079 | 5.6738 | 4000 | 0.2266 | 9.7511 | | 0.001 | 7.0922 | 5000 | 0.2246 | 12.4782 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1 - Datasets 3.2.0 - Tokenizers 0.21.0
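The card leaves usage open ("More information needed"); a minimal inference sketch, not from the card, loading this checkpoint with the transformers ASR pipeline:

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SHEN0829/whisper-turbo_fine_tune1",
    torch_dtype=torch.float16,
    device=0,  # use device=-1 for CPU
)
# "sample.wav" is a placeholder audio file; the model was fine-tuned on Chinese.
result = asr("sample.wav", generate_kwargs={"language": "zh"})
print(result["text"])
```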
StrangeSX/NNN-BNFT-32-004-fnec
StrangeSX
2025-03-20T03:52:17Z
0
0
transformers
[ "transformers", "safetensors", "camembert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-20T03:51:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Romain-XV/b44d81d1-acf0-4a71-bad8-cb1bbcca529e
Romain-XV
2025-03-20T03:49:16Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-03-20T01:27:09Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: b44d81d1-acf0-4a71-bad8-cb1bbcca529e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-3B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d07706475c9111d1_train_data.json ds_type: json format: custom path: /workspace/input_data/d07706475c9111d1_train_data.json type: field_input: text field_instruction: messages field_output: tools format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Romain-XV/b44d81d1-acf0-4a71-bad8-cb1bbcca529e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.00025 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 2958 micro_batch_size: 4 mlflow_experiment_name: /tmp/d07706475c9111d1_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.044705124995529484 wandb_entity: null wandb_mode: online wandb_name: 4700666c-d716-4c84-a87b-c76fa5df3349 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 4700666c-d716-4c84-a87b-c76fa5df3349 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b44d81d1-acf0-4a71-bad8-cb1bbcca529e This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2958 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4272 | 0.0003 | 1 | 0.7962 | | 0.0013 | 0.0300 | 100 | 0.0008 | | 0.0 | 0.0599 | 200 | 0.0001 | | 0.0 | 0.0899 | 300 | 0.0000 | | 0.0012 | 0.1198 | 400 | 0.0000 | | 0.0 | 0.1498 | 500 | 0.0002 | | 0.0 | 0.1797 | 600 | 0.0000 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
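The card omits a loading example. Since this repository contains only a LoRA adapter, it has to be attached to the base model; below is a minimal sketch (not from the original card), assuming the PEFT and Transformers versions listed above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on, then attach the LoRA weights.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Romain-XV/b44d81d1-acf0-4a71-bad8-cb1bbcca529e")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")
```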
mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF
mradermacher
2025-03-20T03:47:41Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:dutti/UnslopNemo-Mag-Mell_T-1", "base_model:quantized:dutti/UnslopNemo-Mag-Mell_T-1", "endpoints_compatible", "region:us" ]
null
2025-03-19T23:46:35Z
--- base_model: dutti/UnslopNemo-Mag-Mell_T-1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dutti/UnslopNemo-Mag-Mell_T-1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF/resolve/main/UnslopNemo-Mag-Mell_T-1.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
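For a quick test without a manual download, llama-cpp-python can fetch a quant straight from this repo. A minimal sketch (not part of the original card), assuming llama-cpp-python is installed with huggingface-hub support; Q4_K_M is the "fast, recommended" file from the table above:

```python
from llama_cpp import Llama

# Fetches the chosen quant from the Hugging Face repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/UnslopNemo-Mag-Mell_T-1-GGUF",
    filename="UnslopNemo-Mag-Mell_T-1.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```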
Alphatao/dd25756c-6a8d-4ec0-b8a1-b1f456f6a333
Alphatao
2025-03-20T03:45:42Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-03-19T22:39:16Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: dd25756c-6a8d-4ec0-b8a1-b1f456f6a333 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-0.5B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 19fd35b02e02d35a_train_data.json ds_type: json format: custom path: /workspace/input_data/19fd35b02e02d35a_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null device_map: ? '' : 0,1,2,3,4,5,6,7 early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Alphatao/dd25756c-6a8d-4ec0-b8a1-b1f456f6a333 hub_repo: null hub_strategy: null hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 8832 micro_batch_size: 4 mlflow_experiment_name: /tmp/19fd35b02e02d35a_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.044897409419476494 wandb_entity: null wandb_mode: online wandb_name: b3cd6cf2-8402-4373-a1f6-7aa530c7ed80 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: b3cd6cf2-8402-4373-a1f6-7aa530c7ed80 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # dd25756c-6a8d-4ec0-b8a1-b1f456f6a333 This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 6648 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.5935 | 0.0003 | 1 | 2.7137 | | 2.4756 | 0.0301 | 100 | 2.4414 | | 2.417 | 0.0602 | 200 | 2.3848 | | 2.1331 | 0.0903 | 300 | 2.3462 | | 2.0593 | 0.1203 | 400 | 2.3140 | | 2.0063 | 0.1504 | 500 | 2.2910 | | 2.4628 | 0.1805 | 600 | 2.2694 | | 2.2041 | 0.2106 | 700 | 2.2530 | | 2.4011 | 0.2407 | 800 | 2.2392 | | 2.2987 | 0.2708 | 900 | 2.2226 | | 2.199 | 0.3008 | 1000 | 2.2099 | | 2.2245 | 0.3309 | 1100 | 2.1960 | | 2.375 | 0.3610 | 1200 | 2.1850 | | 2.2182 | 0.3911 | 1300 | 2.1771 | | 2.3893 | 0.4212 | 1400 | 2.1658 | | 2.1014 | 0.4513 | 1500 | 2.1578 | | 2.1474 | 0.4813 | 1600 | 2.1484 | | 2.4473 | 0.5114 | 1700 | 2.1396 | | 1.9483 | 0.5415 | 1800 | 2.1326 | | 2.1937 | 0.5716 | 1900 | 2.1209 | | 2.2298 | 0.6017 | 2000 | 2.1139 | | 2.1117 | 0.6318 | 2100 | 2.1069 | | 2.2471 | 0.6619 | 2200 | 2.0990 | | 2.1825 | 0.6919 | 2300 | 2.0947 | | 2.1731 | 0.7220 | 2400 | 2.0892 | | 1.8862 | 0.7521 | 2500 | 2.0825 | | 2.1224 | 0.7822 | 2600 | 2.0744 | | 1.9015 | 0.8123 | 2700 | 2.0710 | | 2.103 | 0.8424 | 2800 | 2.0637 | | 2.0056 | 0.8724 | 2900 | 2.0575 | | 1.8938 | 0.9025 | 3000 | 2.0523 | | 2.1503 | 0.9326 | 3100 | 2.0460 | | 2.2166 | 0.9627 | 3200 | 2.0415 | | 2.1761 | 0.9928 | 3300 | 2.0358 | | 1.9747 | 1.0229 | 3400 | 2.0398 | | 1.6468 | 1.0529 | 3500 | 2.0353 | | 1.7083 | 1.0830 | 3600 | 2.0323 | | 1.9831 | 1.1131 | 3700 | 2.0292 | | 1.8527 | 1.1432 | 3800 | 2.0236 | | 1.9907 | 1.1733 | 3900 | 2.0209 | | 1.9898 | 1.2034 | 4000 | 2.0193 | | 1.9063 | 1.2335 | 4100 | 2.0153 | | 1.674 | 1.2635 | 4200 | 2.0101 | | 1.7583 | 1.2936 | 4300 | 2.0083 | | 2.076 | 1.3237 | 4400 | 2.0045 | | 1.92 | 1.3538 | 4500 | 2.0034 | | 2.0666 | 1.3839 | 4600 | 1.9988 | | 1.8152 | 1.4140 | 4700 | 1.9958 | | 1.6996 | 1.4440 | 4800 | 1.9938 | | 1.7863 | 1.4741 | 4900 | 1.9926 | | 1.9677 | 1.5042 | 5000 | 1.9888 | | 1.9768 | 1.5343 | 5100 | 1.9879 | | 1.7981 | 1.5644 | 5200 | 1.9857 | | 1.7892 | 1.5945 | 5300 | 1.9841 | | 1.8826 | 1.6245 | 5400 | 1.9830 | | 1.8107 | 1.6546 | 5500 | 1.9810 | | 2.01 | 1.6847 | 5600 | 1.9790 | | 1.789 | 1.7148 | 5700 | 1.9787 | | 1.6017 | 1.7449 | 5800 | 1.9773 | | 1.8574 | 1.7750 | 5900 | 1.9767 | | 1.695 | 1.8051 | 6000 | 1.9758 | | 1.8974 | 1.8351 | 6100 | 1.9752 | | 1.7432 | 1.8652 | 6200 | 1.9752 | | 1.7931 | 1.8953 | 6300 | 1.9748 | | 1.9937 | 1.9254 | 6400 | 1.9747 | | 2.2055 | 1.9555 | 6500 | 1.9746 | | 1.8637 | 1.9856 | 6600 | 1.9745 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
maanasharma5/dialect-debiasing-gpt2-medium-translated-pnlogmse-e1-r5_eval-n10.0
maanasharma5
2025-03-20T03:41:37Z
0
0
peft
[ "peft", "safetensors", "gpt2", "arxiv:1910.09700", "base_model:openai-community/gpt2-medium", "base_model:adapter:openai-community/gpt2-medium", "region:us" ]
null
2025-03-20T03:41:23Z
--- base_model: gpt2-medium library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OnlineIPO1-0317153039-epoch-6
vectorzhou
2025-03-20T03:40:22Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T03:37:18Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OnlineIPO1 tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OnlineIPO1 This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OnlineIPO1-0317153039-epoch-6", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhourunlongvector/nlhf/runs/oo2oec73) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.0 - Pytorch: 2.2.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LYang123/deepseek_talk_model
LYang123
2025-03-20T03:39:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-20T03:38:40Z
--- base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** LYang123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
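The card does not show how to load the checkpoint. A minimal sketch using Unsloth's own loader (parameter values are illustrative, not from the original card):

```python
from unsloth import FastLanguageModel

# Load the uploaded checkpoint in 4-bit for inference.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LYang123/deepseek_talk_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```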
maanasharma5/dialect-debiasing-gpt2-medium-translated-pnlogmse-e1-r100_eval-n10.0
maanasharma5
2025-03-20T03:37:57Z
0
0
peft
[ "peft", "safetensors", "gpt2", "arxiv:1910.09700", "base_model:openai-community/gpt2-medium", "base_model:adapter:openai-community/gpt2-medium", "region:us" ]
null
2025-03-20T03:37:54Z
--- base_model: gpt2-medium library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
Willy030125/command-r7b-12-2024-gguf
Willy030125
2025-03-20T03:37:06Z
0
0
null
[ "gguf", "base_model:CohereForAI/c4ai-command-r7b-12-2024", "base_model:quantized:CohereForAI/c4ai-command-r7b-12-2024", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-20T01:52:02Z
--- license: cc-by-nc-4.0 base_model: - CohereForAI/c4ai-command-r7b-12-2024 --- Quantized from model: <a href="https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024">CohereForAI/c4ai-command-r7b-12-2024</a> The model was quantized to GGUF format with the following setup: - Model loaded with Transformers v4.48.3 - Converted to GGUF with Transformers v4.49.0 (per llama.cpp's requirements.txt) - llama.cpp commit: <a href="https://github.com/ggml-org/llama.cpp/tree/7841fc723e059d1fd9640e5c0ef19050fcc7c698">@7841fc7</a> (compatible with llama-cpp-python v0.3.8)
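Since the card notes compatibility with llama-cpp-python v0.3.8, here is a minimal chat sketch; the model path is a placeholder for whichever quant file you downloaded:

```python
from llama_cpp import Llama

# Placeholder path; point it at the downloaded GGUF file.
llm = Llama(model_path="./c4ai-command-r7b-12-2024.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```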
texanrangee/64ebb989-b179-4307-8c07-7f577d282c1c
texanrangee
2025-03-20T03:36:09Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-19T23:23:37Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AJosh/emotion
AJosh
2025-03-20T03:33:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-20T03:26:50Z
--- license: apache-2.0 ---
channudam/unet2dcon-khm-35
channudam
2025-03-20T03:29:35Z
13
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "km", "license:mit", "region:us" ]
text-to-image
2025-03-18T06:34:07Z
--- license: mit library_name: diffusers language: - km pipeline_tag: text-to-image --- # Welcome to Khmer Text Image Generation! This model is based on the UNet2DConditionModel architecture and is designed to generate Khmer text images. ## Model Overview This model is a conditional text-to-image generation model, meaning it requires text input encoded using the **channudam/roberta-khm-35** tokenizer and encoder, which are available in this collection. The model was trained from scratch without any pre-trained initialization, ensuring that it learns Khmer text generation from the ground up. ## Usage & Fine-Tuning For optimal performance, fine-tuning on your own dataset is recommended. The model serves as a foundational framework that can be further refined for specific downstream tasks. ## Dataset The dataset used for training is publicly available on Kaggle:<br>🔗 Khmer Text Recognition Dataset: https://www.kaggle.com/datasets/emhengly/khmer-text-recognition-dataset/data ## Example Usage To generate Khmer text images using the **UNet2DConditionModel**, use the following example: ```python import torch import matplotlib.pyplot as plt from diffusers import UNet2DConditionModel, DDPMScheduler from transformers import RobertaTokenizerFast, RobertaModel # Load the UNet model and tokenizer model = UNet2DConditionModel.from_pretrained("channudam/unet2dcon-khm-35").to("cuda") tokenizer = RobertaTokenizerFast.from_pretrained("channudam/roberta-khm-35") text_encoder = RobertaModel.from_pretrained("channudam/roberta-khm-35").to("cuda") # Load the DDPM scheduler scheduler = DDPMScheduler( beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000, ) # Generate random noise for image generation batch_size = 1 image_width, image_height, channels = 64, 32, 1 # Set manual seed for reproducibility generator = torch.Generator(device="cuda").manual_seed(42) latents = torch.randn((batch_size, channels, image_height, image_width), device="cuda", generator=generator) # Encode input text text = "តោះទៅ" # Example Khmer text input_ids = tokenizer(text, max_length=35, padding="max_length", truncation=True, return_tensors="pt")['input_ids'].to("cuda") encoder_hidden_states = text_encoder(input_ids)[0] # Denoising loop scheduler.set_timesteps(50) for t in scheduler.timesteps: with torch.no_grad(): noise_pred = model(latents, t, encoder_hidden_states)[0] latents = scheduler.step(noise_pred, t, latents).prev_sample # Display results print("Encoded Text: ", input_ids) print("Decoded Text: ", tokenizer.batch_decode(input_ids)) print("Text Embedding Shape: ", encoder_hidden_states.shape) # Convert latents to image plt.imshow(((latents[0].permute(1, 2, 0) + 1.0) * 127.5).cpu().type(torch.uint8).numpy(), cmap="gray") plt.axis("off") plt.show() ``` ![Generated Khmer Text Image](https://huggingface.co/channudam/unet2dcon-khm-35/resolve/main/output.png)
lilelife/SyntheOcc
lilelife
2025-03-20T03:28:46Z
0
1
diffusers
[ "diffusers", "safetensors", "image-to-image", "arxiv:2410.00337", "region:us" ]
image-to-image
2024-10-02T13:19:19Z
--- pipeline_tag: image-to-image --- # SyntheOcc > SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs <br> > [Leheng Li](https://len-li.github.io), Weichao Qiu, Yingjie Cai, Xu Yan, Qing Lian, Bingbing Liu, Ying-Cong Chen SyntheOcc is a project focused on synthesizing image data under geometry control (occupancy voxels). This repository provides tools and scripts to process, train, and generate synthetic image data on the nuScenes dataset, using occupancy control. #### [Project Page](https://len-li.github.io/syntheocc-web) | [Paper](https://huggingface.co/papers/2410.00337) | [Video](https://len-li.github.io/syntheocc-web/videos/teaser-occedit.mp4) | [Checkpoint](https://huggingface.co/lilelife/SyntheOcc) Code: https://github.com/EnVision-Research/SyntheOcc ## Table of Contents - [Installation](#installation) - [Prepare Dataset](#prepare-dataset) - [Prepare Checkpoint](#prepare-checkpoint) - [Train](#train) - [Inference](#inference) ## Installation To get started with SyntheOcc, follow these steps: 1. **Clone the repository:** ```bash git clone https://github.com/Len-Li/SyntheOcc.git cd SyntheOcc ``` 2. **Set up an environment:** ```bash pip install torch torchvision transformers pip install diffusers==0.26.0.dev0 # We use an old version of diffusers; please keep this in mind. ``` ## Prepare Dataset To use SyntheOcc, follow the steps below: 1. **Download the NuScenes dataset:** - Register and download the dataset from the [NuScenes website](https://www.nuscenes.org/nuscenes). - Download the [train](https://github.com/JeffWang987/OpenOccupancy/releases/tag/train_pkl)/[val](https://github.com/JeffWang987/OpenOccupancy/releases/tag/val_pkl) pickle files from OpenOccupancy and put them in the `data/nuscenes` folder. After preparation, you will be able to see the following directory structure: ``` SyntheOcc/ ├── data/ │ ├── nuscenes/ │ │ ├── samples/ │ │ ├── sweeps/ │ │ ├── v1.0-trainval/ │ │ ├── nuscenes_occ_infos_train.pkl │ │ ├── nuscenes_occ_infos_val.pkl ``` 2. **Download occupancy annotations from [SurroundOcc](https://github.com/weiyithu/SurroundOcc/blob/main/docs/data.md):** You need to generate the high-resolution 0.2 m occupancy from mesh vertices and put it in the `data/nuscenes` folder. You can also download the 0.5 m occupancy, though its precision may be limited compared with 0.2 m. 3. **Run the script to convert occupancy to 3D semantic multiplane images:** ```bash torchrun utils/gen_mtp.py ``` It will generate the 3D semantic MPIs and save them in the `data/nuscenes/samples_syntheocc_surocc/` folder. ## Prepare Checkpoint Our model is based on [stable-diffusion-v2-1](https://huggingface.co/stabilityai/stable-diffusion-v2-1). Please put its weights at `./SyntheOcc/ckp/`. Our SyntheOcc checkpoint is released on [Hugging Face](https://huggingface.co/lilelife/SyntheOcc). If you want to run inference with our model, please also put it at `./SyntheOcc/ckp/`.
## Train ```bash bash train.sh ``` The details of the script are as follows: ```bash export WANDB_DISABLED=True export HF_HUB_OFFLINE=True export MODEL_DIR="./ckp/stable-diffusion-v2-1" export EXP_NAME="train_syntheocc" export OUTPUT_DIR="./ckp/$EXP_NAME" export SAVE_IMG_DIR="vis_dir/$EXP_NAME/samples" export DATA_USED="samples_syntheocc_surocc" accelerate launch --gpu_ids 0, --num_processes 1 --main_process_port 3226 train.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --width=800 \ --height=448 \ --learning_rate=2e-5 \ --num_train_epochs=6 \ --train_batch_size=1 \ --mixed_precision="fp16" \ --num_validation_images=2 \ --validation_steps=1000 \ --checkpointing_steps=5000 \ --checkpoints_total_limit=10 \ --ctrl_channel=257 \ --enable_xformers_memory_efficient_attention \ --report_to='wandb' \ --use_cbgs=True \ --mtp_path='samples_syntheocc_surocc' \ --resume_from_checkpoint="latest" ``` Training takes 1-2 days to complete, depending on the hardware. We use a fixed batch size of 1 and an image resolution of (800, 448), which requires 25 GB of memory per GPU. ## Inference ```bash bash infer.sh ``` You will find generated images at `./ckp/$EXP_NAME/samples`. An example is shown below: ![image](./ckp/demo.jpg) ## Acknowledgment Additionally, we express our gratitude to the authors of the following open-source projects: - [SurroundOcc](https://github.com/weiyithu/SurroundOcc) (Occupancy annotation) - [OpenOccupancy](https://github.com/JeffWang987/OpenOccupancy) (Occupancy annotation) - [MagicDrive](https://github.com/cure-lab/MagicDrive) (Cross-view and cross-frame attention implementation) - [Diffusers controlnet example](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) (Diffusion model implementation) ## BibTeX ```bibtex @inproceedings{li2024SyntheOcc, title={SyntheOcc: Synthesize Geometric Controlled Street View Images through 3D Semantic MPIs}, author={Li, Leheng and Qiu, Weichao and Chen, Ying-Cong and others}, booktitle={arxiv preprint}, year={2024} } ``` This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
drmcbride/l3-test-3b-Q8_0-GGUF
drmcbride
2025-03-20T03:25:31Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:drmcbride/l3-test-3b", "base_model:quantized:drmcbride/l3-test-3b", "endpoints_compatible", "region:us" ]
null
2025-03-20T03:25:15Z
--- base_model: drmcbride/l3-test-3b tags: - llama-cpp - gguf-my-repo --- # drmcbride/l3-test-3b-Q8_0-GGUF This model was converted to GGUF format from [`drmcbride/l3-test-3b`](https://huggingface.co/drmcbride/l3-test-3b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/drmcbride/l3-test-3b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo drmcbride/l3-test-3b-Q8_0-GGUF --hf-file l3-test-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo drmcbride/l3-test-3b-Q8_0-GGUF --hf-file l3-test-3b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo drmcbride/l3-test-3b-Q8_0-GGUF --hf-file l3-test-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo drmcbride/l3-test-3b-Q8_0-GGUF --hf-file l3-test-3b-q8_0.gguf -c 2048 ```
mlfoundations-dev/global_batchsize_1024_laradjusted2
mlfoundations-dev
2025-03-20T03:24:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-18T19:13:45Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: global_batchsize_1024_laradjusted2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # global_batchsize_1024_laradjusted2 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the open-thoughts/OpenThoughts-114k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000226274 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 32 - total_train_batch_size: 1024 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
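Note: the effective batch size above follows directly from the listed settings: train_batch_size × num_devices × gradient_accumulation_steps = 1 × 32 × 32 = 1024.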
mlfoundations-dev/global_batchsize_1024_laradjusted8
mlfoundations-dev
2025-03-20T03:19:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-18T19:14:28Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: global_batchsize_1024_laradjusted8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # global_batchsize_1024_laradjusted8 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the open-thoughts/OpenThoughts-114k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00011313708 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 32 - total_train_batch_size: 1024 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
knguyennguyen/Qwen2-VietMed-base
knguyennguyen
2025-03-20T03:16:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T03:13:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
raihanp/business-card
raihanp
2025-03-20T03:13:16Z
11
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:cahya/xlm-roberta-base-indonesian-NER", "base_model:finetune:cahya/xlm-roberta-base-indonesian-NER", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-18T01:26:38Z
--- library_name: transformers base_model: cahya/xlm-roberta-base-indonesian-NER tags: - generated_from_trainer model-index: - name: business-card results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # business-card This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 12 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
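The card omits a usage example; below is a minimal sketch with the Transformers token-classification pipeline (the sample text is illustrative, not from the original card):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="raihanp/business-card",
    aggregation_strategy="simple",
)
print(ner("Budi Santoso - PT Maju Jaya, Jl. Sudirman No. 1, Jakarta"))
```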
VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft
VPTQ-community
2025-03-20T03:12:57Z
20
0
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-28T15:38:40Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-14B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model is reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release. It is intended only for experimental purposes. Users are responsible for any consequences arising from the use of this model. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 6.457276344299316 }, "ctx_4096": { "wikitext2": 5.975520610809326 }, "ctx_8192": { "wikitext2": 5.70115327835083 } } ```
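The card does not include a loading snippet. A minimal sketch using the `vptq` package; the loader API is assumed from the linked VPTQ repository's README:

```python
import transformers
import vptq

model_id = "VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-256-woft"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
# vptq.AutoModelForCausalLM handles the VPTQ-quantized weights (API assumed from the VPTQ README).
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain vector quantization in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```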
VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-65536-woft
VPTQ-community
2025-03-20T03:12:33Z
8
0
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-28T15:42:31Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-14B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model is reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release. It is intended only for experimental purposes. Users are responsible for any consequences arising from the use of this model. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 5.8772149085998535 }, "ctx_4096": { "wikitext2": 5.4326276779174805 }, "ctx_8192": { "wikitext2": 5.163432598114014 } } ```
VPTQ-community/Qwen2.5-14B-Instruct-v8-k65536-0-woft
VPTQ-community
2025-03-20T03:11:44Z
12
0
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:quantized:Qwen/Qwen2.5-14B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-28T15:40:33Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-14B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model is reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release. It is intended only for experimental purposes. Users are responsible for any consequences arising from the use of this model. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 8.052566528320312 }, "ctx_4096": { "wikitext2": 7.470157146453857 }, "ctx_8192": { "wikitext2": 7.160165786743164 } } ```
VPTQ-community/Meta-Llama-3.1-8B-Instruct-v8-k65536-256-woft
VPTQ-community
2025-03-20T03:10:44Z
110
0
null
[ "safetensors", "llama", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "vptq", "region:us" ]
null
2024-09-24T05:11:28Z
--- license: llama3.1 base_model: - meta-llama/Llama-3.1-8B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model was reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release and is intended only for experimental purposes. Users are responsible for any consequences arising from its use. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 8.166712760925293 }, "ctx_4096": { "wikitext2": 7.6312713623046875 }, "ctx_8192": { "wikitext2": 7.3152079582214355 } } ```
VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-0-woft
VPTQ-community
2025-03-20T03:09:50Z
34
0
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-29T02:16:40Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-7B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model was reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release and is intended only for experimental purposes. Users are responsible for any consequences arising from its use. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 9.751266479492188 }, "ctx_4096": { "wikitext2": 9.006874084472656 }, "ctx_8192": { "wikitext2": 8.547307014465332 } } ```
lesso10/f2dce8bb-d0d4-4cf8-8970-e15aac49f9df
lesso10
2025-03-20T03:09:28Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-03-20T01:28:05Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: f2dce8bb-d0d4-4cf8-8970-e15aac49f9df results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-3B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d07706475c9111d1_train_data.json ds_type: json format: custom path: /workspace/input_data/d07706475c9111d1_train_data.json type: field_input: text field_instruction: messages field_output: tools format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso10/f2dce8bb-d0d4-4cf8-8970-e15aac49f9df hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.00021 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/d07706475c9111d1_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 100 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4700666c-d716-4c84-a87b-c76fa5df3349 wandb_project: 10a wandb_run: your_name wandb_runid: 4700666c-d716-4c84-a87b-c76fa5df3349 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f2dce8bb-d0d4-4cf8-8970-e15aac49f9df This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00021 - train_batch_size: 4 - eval_batch_size: 4 - seed: 100 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 7.9294 | | 0.0004 | 0.1506 | 500 | 0.0005 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
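The card above stops at framework versions without a usage snippet. A minimal sketch for loading the LoRA adapter onto its Qwen2.5-3B-Instruct base with PEFT (repo ids taken from the config above), assuming the adapter repo is public; the prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-3B-Instruct"
adapter_id = "lesso10/f2dce8bb-d0d4-4cf8-8970-e15aac49f9df"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights (r=64, alpha=128 per the config)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```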
lesso17/bb20785e-7938-4cdd-b069-d9841b1970d9
lesso17
2025-03-20T03:09:03Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-03-20T01:28:16Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: bb20785e-7938-4cdd-b069-d9841b1970d9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-3B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - d07706475c9111d1_train_data.json ds_type: json format: custom path: /workspace/input_data/d07706475c9111d1_train_data.json type: field_input: text field_instruction: messages field_output: tools format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso17/bb20785e-7938-4cdd-b069-d9841b1970d9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000217 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/d07706475c9111d1_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 170 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 4700666c-d716-4c84-a87b-c76fa5df3349 wandb_project: 17a wandb_run: your_name wandb_runid: 4700666c-d716-4c84-a87b-c76fa5df3349 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bb20785e-7938-4cdd-b069-d9841b1970d9 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000217 - train_batch_size: 4 - eval_batch_size: 4 - seed: 170 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0003 | 1 | 7.9342 | | 0.002 | 0.1506 | 500 | 0.0005 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
VPTQ-community/Qwen2.5-7B-Instruct-v8-k65536-256-woft
VPTQ-community
2025-03-20T03:09:00Z
23
0
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-24T14:50:31Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-7B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model was reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release and is intended only for experimental purposes. Users are responsible for any consequences arising from its use. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 7.946412086486816 }, "ctx_4096": { "wikitext2": 7.310400009155273 }, "ctx_8192": { "wikitext2": 6.938364028930664 } } ```
VPTQ-community/Qwen2.5-7B-Instruct-v16-k65536-65536-woft
VPTQ-community
2025-03-20T03:08:28Z
30
1
null
[ "safetensors", "qwen2", "VPTQ", "Quantized", "Quantization", "arxiv:2409.17066", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-7B-Instruct", "license:other", "vptq", "region:us" ]
null
2024-09-29T02:11:02Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE base_model: - Qwen/Qwen2.5-7B-Instruct base_model_relation: quantized tags: - VPTQ - Quantized - Quantization --- **Disclaimer**: This model was reproduced from the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)). The model itself is sourced from a community release and is intended only for experimental purposes. Users are responsible for any consequences arising from its use. **Note**: The PPL test results are for reference only and were collected using the GPTQ testing script. ```json { "ctx_2048": { "wikitext2": 9.281352996826172 }, "ctx_4096": { "wikitext2": 8.55495834350586 }, "ctx_8192": { "wikitext2": 8.152359962463379 } } ```
mtzig/reverse_add_replicate_eval17_small_1layer
mtzig
2025-03-20T03:02:44Z
0
0
transformers
[ "transformers", "safetensors", "nanogpt", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:46:30Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy model-index: - name: reverse_add_replicate_eval17_small_1layer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reverse_add_replicate_eval17_small_1layer This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5994 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 128 - seed: 7658372 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | No log | 0 | 0 | 2.6405 | 0.0 | | 2.6234 | 0.0064 | 100 | 2.6259 | 0.0 | | 2.577 | 0.0128 | 200 | 2.5785 | 0.0 | | 2.5307 | 0.0192 | 300 | 2.5300 | 0.0 | | 2.4899 | 0.0256 | 400 | 2.4878 | 0.0 | | 2.4573 | 0.032 | 500 | 2.4559 | 0.0 | | 2.4345 | 0.0384 | 600 | 2.4337 | 0.0 | | 2.4184 | 0.0448 | 700 | 2.4186 | 0.0 | | 2.4046 | 0.0512 | 800 | 2.4096 | 0.0 | | 2.3941 | 0.0576 | 900 | 2.3994 | 0.0 | | 2.3886 | 0.064 | 1000 | 2.3996 | 0.0 | | 2.3771 | 0.0704 | 1100 | 2.4505 | 0.0 | | 2.37 | 0.0768 | 1200 | 2.4449 | 0.0 | | 2.3755 | 0.0832 | 1300 | 2.4213 | 0.0 | | 2.3742 | 0.0896 | 1400 | 2.5070 | 0.0 | | 2.3745 | 0.096 | 1500 | 2.4311 | 0.0 | | 2.3674 | 0.1024 | 1600 | 2.4830 | 0.0 | | 2.3656 | 0.1088 | 1700 | 2.4634 | 0.0 | | 2.3616 | 0.1152 | 1800 | 2.4772 | 0.0 | | 2.3681 | 0.1216 | 1900 | 2.4977 | 0.0 | | 2.3728 | 0.128 | 2000 | 2.6562 | 0.0 | | 2.3677 | 0.1344 | 2100 | 2.4819 | 0.0 | | 2.3676 | 0.1408 | 2200 | 2.4610 | 0.0 | | 2.3634 | 0.1472 | 2300 | 2.5009 | 0.0 | | 2.3705 | 0.1536 | 2400 | 2.4709 | 0.0 | | 2.3663 | 0.16 | 2500 | 2.4841 | 0.0 | | 2.3676 | 0.1664 | 2600 | 2.5541 | 0.0 | | 2.3573 | 0.1728 | 2700 | 2.4714 | 0.0 | | 2.3642 | 0.1792 | 2800 | 2.4749 | 0.0 | | 2.3626 | 0.1856 | 2900 | 2.5095 | 0.0 | | 2.365 | 0.192 | 3000 | 2.5000 | 0.0 | | 2.3592 | 0.1984 | 3100 | 2.5363 | 0.0 | | 2.3649 | 0.2048 | 3200 | 2.4799 | 0.0 | | 2.3576 | 0.2112 | 3300 | 2.4855 | 0.0 | | 2.3679 | 0.2176 | 3400 | 2.5114 | 0.0 | | 2.3647 | 0.224 | 3500 | 2.5487 | 0.0 | | 2.371 | 0.2304 | 3600 | 2.4369 | 0.0 | | 2.354 | 0.2368 | 3700 | 2.5066 | 0.0 | | 2.3581 | 0.2432 | 3800 | 2.4871 | 0.0 | | 2.364 | 0.2496 | 3900 | 2.5979 | 0.0 | | 2.3597 | 0.256 | 4000 | 2.5254 | 0.0 | | 2.3675 | 0.2624 | 4100 | 2.5234 | 0.0 | | 2.3613 | 0.2688 | 4200 | 2.4946 | 0.0 | | 2.3629 | 0.2752 | 4300 | 2.4694 | 0.0 | | 2.3609 | 0.2816 | 4400 | 2.4860 | 0.0 | | 2.355 | 0.288 | 4500 | 2.5495 | 0.0 | | 2.3633 | 0.2944 | 4600 | 2.5450 | 0.0 | | 2.3577 | 0.3008 | 4700 | 2.5079 | 0.0 | | 2.3628 | 0.3072 | 4800 | 2.5156 | 0.0 | | 2.3549 | 0.3136 | 4900 | 2.4778 | 0.0 | | 2.3621 | 0.32 | 5000 | 2.5554 | 0.0 | | 2.3563 | 0.3264 | 5100 | 2.5000 | 0.0 | | 2.3624 | 0.3328 | 5200 | 2.5690 | 0.0 | | 2.3563 | 0.3392 | 5300 | 2.4614 | 0.0 | | 2.3553 | 0.3456 | 5400 | 2.4333 | 0.0 | 
| 2.3573 | 0.352 | 5500 | 2.4946 | 0.0 | | 2.3586 | 0.3584 | 5600 | 2.5507 | 0.0 | | 2.3608 | 0.3648 | 5700 | 2.5246 | 0.0 | | 2.3626 | 0.3712 | 5800 | 2.4721 | 0.0 | | 2.3635 | 0.3776 | 5900 | 2.5269 | 0.0 | | 2.3555 | 0.384 | 6000 | 2.4758 | 0.0 | | 2.3607 | 0.3904 | 6100 | 2.5192 | 0.0 | | 2.3559 | 0.3968 | 6200 | 2.5747 | 0.0 | | 2.3664 | 0.4032 | 6300 | 2.4620 | 0.0 | | 2.3604 | 0.4096 | 6400 | 2.5626 | 0.0 | | 2.3647 | 0.416 | 6500 | 2.5473 | 0.0 | | 2.3624 | 0.4224 | 6600 | 2.5852 | 0.0 | | 2.3574 | 0.4288 | 6700 | 2.6200 | 0.0 | | 2.36 | 0.4352 | 6800 | 2.5269 | 0.0 | | 2.3557 | 0.4416 | 6900 | 2.5453 | 0.0 | | 2.3603 | 0.448 | 7000 | 2.5212 | 0.0 | | 2.3569 | 0.4544 | 7100 | 2.6011 | 0.0 | | 2.3544 | 0.4608 | 7200 | 2.5631 | 0.0 | | 2.3613 | 0.4672 | 7300 | 2.5656 | 0.0 | | 2.3565 | 0.4736 | 7400 | 2.5427 | 0.0 | | 2.3551 | 0.48 | 7500 | 2.4880 | 0.0 | | 2.3585 | 0.4864 | 7600 | 2.5707 | 0.0 | | 2.3576 | 0.4928 | 7700 | 2.5616 | 0.0 | | 2.3632 | 0.4992 | 7800 | 2.5697 | 0.0 | | 2.3579 | 0.5056 | 7900 | 2.5803 | 0.0 | | 2.3593 | 0.512 | 8000 | 2.6355 | 0.0 | | 2.3604 | 0.5184 | 8100 | 2.5355 | 0.0 | | 2.3594 | 0.5248 | 8200 | 2.5198 | 0.0 | | 2.357 | 0.5312 | 8300 | 2.5762 | 0.0 | | 2.3487 | 0.5376 | 8400 | 2.5462 | 0.0 | | 2.3652 | 0.544 | 8500 | 2.5878 | 0.0 | | 2.3549 | 0.5504 | 8600 | 2.5376 | 0.0 | | 2.3516 | 0.5568 | 8700 | 2.5517 | 0.0 | | 2.358 | 0.5632 | 8800 | 2.5280 | 0.0 | | 2.3587 | 0.5696 | 8900 | 2.5489 | 0.0 | | 2.3646 | 0.576 | 9000 | 2.6044 | 0.0 | | 2.3549 | 0.5824 | 9100 | 2.5392 | 0.0 | | 2.3579 | 0.5888 | 9200 | 2.6203 | 0.0 | | 2.3654 | 0.5952 | 9300 | 2.5952 | 0.0 | | 2.3657 | 0.6016 | 9400 | 2.5479 | 0.0 | | 2.3571 | 0.608 | 9500 | 2.5350 | 0.0 | | 2.3515 | 0.6144 | 9600 | 2.6317 | 0.0 | | 2.3565 | 0.6208 | 9700 | 2.5772 | 0.0 | | 2.3534 | 0.6272 | 9800 | 2.6011 | 0.0 | | 2.3574 | 0.6336 | 9900 | 2.4998 | 0.0 | | 2.3553 | 0.64 | 10000 | 2.5933 | 0.0 | | 2.3443 | 0.6464 | 10100 | 2.5925 | 0.0 | | 2.3581 | 0.6528 | 10200 | 2.6502 | 0.0 | | 2.3488 | 0.6592 | 10300 | 2.6558 | 0.0 | | 2.3659 | 0.6656 | 10400 | 2.6271 | 0.0 | | 2.353 | 0.672 | 10500 | 2.5513 | 0.0 | | 2.3497 | 0.6784 | 10600 | 2.6017 | 0.0 | | 2.3573 | 0.6848 | 10700 | 2.5998 | 0.0 | | 2.3642 | 0.6912 | 10800 | 2.5925 | 0.0 | | 2.3522 | 0.6976 | 10900 | 2.4902 | 0.0 | | 2.3543 | 0.704 | 11000 | 2.5761 | 0.0 | | 2.3538 | 0.7104 | 11100 | 2.5737 | 0.0 | | 2.3545 | 0.7168 | 11200 | 2.5827 | 0.0 | | 2.3586 | 0.7232 | 11300 | 2.6190 | 0.0 | | 2.3575 | 0.7296 | 11400 | 2.5708 | 0.0 | | 2.3573 | 0.736 | 11500 | 2.5409 | 0.0 | | 2.3575 | 0.7424 | 11600 | 2.5762 | 0.0 | | 2.3576 | 0.7488 | 11700 | 2.6299 | 0.0 | | 2.3487 | 0.7552 | 11800 | 2.5414 | 0.0 | | 2.3623 | 0.7616 | 11900 | 2.5767 | 0.0 | | 2.3599 | 0.768 | 12000 | 2.5446 | 0.0 | | 2.3506 | 0.7744 | 12100 | 2.5832 | 0.0 | | 2.3546 | 0.7808 | 12200 | 2.5563 | 0.0 | | 2.3543 | 0.7872 | 12300 | 2.5601 | 0.0 | | 2.3507 | 0.7936 | 12400 | 2.5719 | 0.0 | | 2.3524 | 0.8 | 12500 | 2.5835 | 0.0 | | 2.3447 | 0.8064 | 12600 | 2.5615 | 0.0 | | 2.3573 | 0.8128 | 12700 | 2.6363 | 0.0 | | 2.356 | 0.8192 | 12800 | 2.6349 | 0.0 | | 2.3544 | 0.8256 | 12900 | 2.5914 | 0.0 | | 2.3638 | 0.832 | 13000 | 2.5714 | 0.0 | | 2.3591 | 0.8384 | 13100 | 2.6121 | 0.0 | | 2.3565 | 0.8448 | 13200 | 2.5863 | 0.0 | | 2.3481 | 0.8512 | 13300 | 2.6126 | 0.0 | | 2.358 | 0.8576 | 13400 | 2.5951 | 0.0 | | 2.3518 | 0.864 | 13500 | 2.6111 | 0.0 | | 2.3445 | 0.8704 | 13600 | 2.6072 | 0.0 | | 2.3466 | 0.8768 | 13700 | 2.6104 | 0.0 | | 2.3613 | 0.8832 | 13800 | 2.5829 | 0.0 | | 2.3506 | 
0.8896 | 13900 | 2.6030 | 0.0 | | 2.3478 | 0.896 | 14000 | 2.5717 | 0.0 | | 2.3618 | 0.9024 | 14100 | 2.6115 | 0.0 | | 2.3628 | 0.9088 | 14200 | 2.5984 | 0.0 | | 2.3504 | 0.9152 | 14300 | 2.6091 | 0.0 | | 2.3596 | 0.9216 | 14400 | 2.6084 | 0.0 | | 2.3556 | 0.928 | 14500 | 2.5812 | 0.0 | | 2.3624 | 0.9344 | 14600 | 2.6058 | 0.0 | | 2.3564 | 0.9408 | 14700 | 2.5861 | 0.0 | | 2.3649 | 0.9472 | 14800 | 2.5941 | 0.0 | | 2.3522 | 0.9536 | 14900 | 2.5955 | 0.0 | | 2.3436 | 0.96 | 15000 | 2.5882 | 0.0 | | 2.3552 | 0.9664 | 15100 | 2.6067 | 0.0 | | 2.3537 | 0.9728 | 15200 | 2.5985 | 0.0 | | 2.36 | 0.9792 | 15300 | 2.5967 | 0.0 | | 2.3605 | 0.9856 | 15400 | 2.5998 | 0.0 | | 2.3544 | 0.992 | 15500 | 2.5996 | 0.0 | | 2.3535 | 0.9984 | 15600 | 2.5994 | 0.0 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.1
heisejiasuo/DyFLUX
heisejiasuo
2025-03-20T03:01:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-20T02:33:22Z
--- license: apache-2.0 ---
yusuke111/llm-jp-3-3.7b-databricks-dolly-15k-ja-gozaru
yusuke111
2025-03-20T03:00:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:llm-jp/llm-jp-3-3.7b", "base_model:finetune:llm-jp/llm-jp-3-3.7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:59:50Z
--- base_model: llm-jp/llm-jp-3-3.7b tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yusuke111 - **License:** apache-2.0 - **Finetuned from model:** llm-jp/llm-jp-3-3.7b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
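Since the card credits Unsloth for training, the same library can load the result for inference. A minimal sketch, assuming a standard Unsloth install; the 4-bit loading, sequence length, and Japanese prompt are illustrative choices, not the author's settings:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yusuke111/llm-jp-3-3.7b-databricks-dolly-15k-ja-gozaru",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # illustrative; saves VRAM at some quality cost
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("指示: 日本の首都はどこですか?", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```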
bartowski/soob3123_amoral-gemma3-4B-GGUF
bartowski
2025-03-20T02:59:54Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "gemma3", "analytical-tasks", "bias-neutralization", "uncensored", "text-generation", "en", "base_model:soob3123/amoral-gemma3-4B", "base_model:quantized:soob3123/amoral-gemma3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-03-20T02:43:55Z
--- quantized_by: bartowski pipeline_tag: text-generation license: apache-2.0 base_model_relation: quantized language: - en base_model: soob3123/amoral-gemma3-4B tags: - text-generation-inference - transformers - gemma3 - analytical-tasks - bias-neutralization - uncensored --- ## Llamacpp imatrix Quantizations of amoral-gemma3-4B by soob3123 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4925">b4925</a> for quantization. Original model: https://huggingface.co/soob3123/amoral-gemma3-4B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project ## Prompt format ``` <bos><start_of_turn>user {system_prompt} {prompt}<end_of_turn> <start_of_turn>model <end_of_turn> <start_of_turn>model ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [amoral-gemma3-4B-bf16.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-bf16.gguf) | bf16 | 7.77GB | false | Full BF16 weights. | | [amoral-gemma3-4B-Q8_0.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q8_0.gguf) | Q8_0 | 4.13GB | false | Extremely high quality, generally unneeded but max available quant. | | [amoral-gemma3-4B-Q6_K_L.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q6_K_L.gguf) | Q6_K_L | 3.35GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [amoral-gemma3-4B-Q6_K.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q6_K.gguf) | Q6_K | 3.19GB | false | Very high quality, near perfect, *recommended*. | | [amoral-gemma3-4B-Q5_K_L.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q5_K_L.gguf) | Q5_K_L | 2.99GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [amoral-gemma3-4B-Q5_K_M.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q5_K_M.gguf) | Q5_K_M | 2.83GB | false | High quality, *recommended*. | | [amoral-gemma3-4B-Q5_K_S.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q5_K_S.gguf) | Q5_K_S | 2.76GB | false | High quality, *recommended*. | | [amoral-gemma3-4B-Q4_K_L.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q4_K_L.gguf) | Q4_K_L | 2.65GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [amoral-gemma3-4B-Q4_1.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q4_1.gguf) | Q4_1 | 2.56GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. | | [amoral-gemma3-4B-Q4_K_M.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q4_K_M.gguf) | Q4_K_M | 2.49GB | false | Good quality, default size for most use cases, *recommended*. 
| | [amoral-gemma3-4B-Q3_K_XL.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q3_K_XL.gguf) | Q3_K_XL | 2.40GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [amoral-gemma3-4B-Q4_K_S.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q4_K_S.gguf) | Q4_K_S | 2.38GB | false | Slightly lower quality with more space savings, *recommended*. | | [amoral-gemma3-4B-Q4_0.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q4_0.gguf) | Q4_0 | 2.37GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. | | [amoral-gemma3-4B-IQ4_NL.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ4_NL.gguf) | IQ4_NL | 2.36GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. | | [amoral-gemma3-4B-IQ4_XS.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ4_XS.gguf) | IQ4_XS | 2.26GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [amoral-gemma3-4B-Q3_K_L.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q3_K_L.gguf) | Q3_K_L | 2.24GB | false | Lower quality but usable, good for low RAM availability. | | [amoral-gemma3-4B-Q3_K_M.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q3_K_M.gguf) | Q3_K_M | 2.10GB | false | Low quality. | | [amoral-gemma3-4B-IQ3_M.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ3_M.gguf) | IQ3_M | 1.99GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [amoral-gemma3-4B-Q3_K_S.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q3_K_S.gguf) | Q3_K_S | 1.94GB | false | Low quality, not recommended. | | [amoral-gemma3-4B-Q2_K_L.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q2_K_L.gguf) | Q2_K_L | 1.89GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [amoral-gemma3-4B-IQ3_XS.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ3_XS.gguf) | IQ3_XS | 1.86GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [amoral-gemma3-4B-Q2_K.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-Q2_K.gguf) | Q2_K | 1.73GB | false | Very low quality but surprisingly usable. | | [amoral-gemma3-4B-IQ3_XXS.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ3_XXS.gguf) | IQ3_XXS | 1.69GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. | | [amoral-gemma3-4B-IQ2_M.gguf](https://huggingface.co/bartowski/soob3123_amoral-gemma3-4B-GGUF/blob/main/soob3123_amoral-gemma3-4B-IQ2_M.gguf) | IQ2_M | 1.54GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. 
| ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/soob3123_amoral-gemma3-4B-GGUF --include "soob3123_amoral-gemma3-4B-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/soob3123_amoral-gemma3-4B-GGUF --include "soob3123_amoral-gemma3-4B-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (soob3123_amoral-gemma3-4B-Q8_0) or download them all in place (./) </details> ## ARM/AVX information Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass. Now, however, there is something called "online repacking" for weights. Details in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly. As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0. Additionally, if you want to get slightly better quality for ARM, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower but it will result in an overall speed increase. <details> <summary>Click to view Q4_0_X_X information (deprecated)</summary> I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking. 
<details> <summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary> | model | size | params | backend | threads | test | t/s | % (vs Q4_0) | | ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% | | qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% | | qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% | | qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% | Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation. </details> </details> ## Which file should I choose? <details> <summary>Click here for details</summary> A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. </details> ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output. 
Thank you to LM Studio for sponsoring my work. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
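As a concrete way to run one of these quants from Python, here is a minimal sketch with `llama-cpp-python` (one of the llama.cpp-based projects mentioned above), assuming `pip install llama-cpp-python` plus `huggingface_hub`; the Q4_K_M pick and generation settings are illustrative:

```python
from llama_cpp import Llama

# Downloads the chosen GGUF file from the repo's quant table above.
llm = Llama.from_pretrained(
    repo_id="bartowski/soob3123_amoral-gemma3-4B-GGUF",
    filename="soob3123_amoral-gemma3-4B-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Q4_K_M quant trades off."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```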
deddyext/mistral-finetuned-nbs
deddyext
2025-03-20T02:58:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:58:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
maanasharma5/dialect-debiasing-gpt2-medium-translated-pnlogmse-e1-r2_eval-n5.0
maanasharma5
2025-03-20T02:54:36Z
0
0
peft
[ "peft", "safetensors", "gpt2", "arxiv:1910.09700", "base_model:openai-community/gpt2-medium", "base_model:adapter:openai-community/gpt2-medium", "region:us" ]
null
2025-03-20T02:54:24Z
--- base_model: gpt2-medium library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
pasukka/detail-classifier-new-app-v.9
pasukka
2025-03-20T02:50:55Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T02:49:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
icefog72/Ice0.95-19.03-RP
icefog72
2025-03-20T02:50:39Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-19T22:59:12Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # Ice0.95-19.03-RP This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the Passthrough merge method, with H:\FModels\Ice0.80-03.02-RP + E:\FModels\Fog0.01-19.03-RP-lora as the base. ### Models Merged The following models were included in the merge: ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: H:\FModels\Ice0.80-03.02-RP+E:\FModels\Fog0.01-19.03-RP-lora dtype: bfloat16 merge_method: passthrough models: - model: H:\FModels\Ice0.80-03.02-RP+E:\FModels\Fog0.01-19.03-RP-lora ```
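To reproduce a merge like this one, the YAML above can typically be passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./Ice0.95-19.03-RP --cuda` (assuming a standard mergekit install; the local `H:\`/`E:\` model paths in the config would of course need to exist on your machine).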
TongZheng1999/gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-3
TongZheng1999
2025-03-20T02:50:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "alignment-handbook", "trl", "sft", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T02:23:11Z
--- library_name: transformers model_name: gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-3 tags: - generated_from_trainer - alignment-handbook - trl - sft licence: license --- # Model Card for gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-3 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="TongZheng1999/gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/139sdyhw) This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.0 - Pytorch: 2.6.0 - Datasets: 3.3.1 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Es1v/Sentiment_tweets_distilbert
Es1v
2025-03-20T02:46:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T02:29:33Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: Sentiment_tweets_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment_tweets_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1522 - F1: 0.9377 - Acc: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Acc | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.1507 | 1.0 | 500 | 0.1922 | 0.9331 | 0.9325 | | 0.1156 | 2.0 | 1000 | 0.1507 | 0.9404 | 0.94 | | 0.0807 | 3.0 | 1500 | 0.1522 | 0.9377 | 0.9375 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
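The card above omits an inference example. A minimal sketch with the `transformers` pipeline, assuming the checkpoint is public; note that the label names come from the checkpoint's config, which the card does not document:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Es1v/Sentiment_tweets_distilbert")
print(clf("I can't believe how good this update is!"))
# -> e.g. [{'label': 'LABEL_1', 'score': 0.98}]; label ids/names depend on the checkpoint config
```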
n31e/Dolphin3.0-Llama3.2-3B
n31e
2025-03-20T02:44:18Z
0
0
null
[ "safetensors", "text-generation", "llama", "fine-tuned", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-03-20T02:24:17Z
--- language: en license: apache-2.0 tags: - text-generation - llama - fine-tuned model-index: - name: Dolphin3.0-Llama3.2-3B results: [] --- # Dolphin3.0-Llama3.2-3B-finetuned-20250320 ## Model Description This model was created by fine-tuning cognitivecomputations/Dolphin3.0-Llama3.2-3B on the following datasets: sdiazlor/python-reasoning-dataset, fka/awesome-chatgpt-prompts, THUDM/AgentInstruct, O1-OPEN/OpenO1-SFT ## Training Configuration - Base model: cognitivecomputations/Dolphin3.0-Llama3.2-3B - Fine-tuning method: LoRA (r=8, alpha=16) - Target modules: q_proj, v_proj - Training date: 2025-03-20 - Learning rate: 0.0001 - Max sequence length: 768 - Training steps: 400 ## Example Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("n31e/Dolphin3.0-Llama3.2-3B-finetuned-20250320") tokenizer = AutoTokenizer.from_pretrained("n31e/Dolphin3.0-Llama3.2-3B-finetuned-20250320") # Format prompt according to model's expected format prompt = "<|user|>\nYour prompt here\n<|assistant|>\n" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) # Generate response outputs = model.generate( inputs["input_ids"], max_length=512, temperature=0.7, top_p=0.9, repetition_penalty=1.2, do_sample=True, ) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ```
BlitherBoom/AutoDroid-V2
BlitherBoom
2025-03-20T02:39:45Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-19T12:21:08Z
--- library_name: transformers license: other base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: AutoDroid-V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AutoDroid-V2 This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the autodroid dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.95) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.4.0+cu118 - Datasets 3.0.0 - Tokenizers 0.20.3
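The card omits a usage example; below is a minimal sketch of standard chat-style inference for a Llama-3.1 fine-tune (the prompt is an illustrative placeholder, since the autodroid task format is not documented in this card).

```python
# Hedged sketch: chat-template inference with the fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BlitherBoom/AutoDroid-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Open the settings page of the app."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```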
Yasuo2k5/Albert_vn
Yasuo2k5
2025-03-20T02:39:19Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-20T02:39:19Z
--- license: apache-2.0 ---
allen9926/LLM
allen9926
2025-03-20T02:36:31Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:35:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shubingxl/LLM_demo
shubingxl
2025-03-20T02:36:10Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:35:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sitenote/ticker-news-classifier
sitenote
2025-03-20T02:36:01Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "deberta-v3", "en", "dataset:sitenote/ticker_news_classifier_2", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-12T03:35:39Z
--- license: apache-2.0 datasets: - sitenote/ticker_news_classifier_2 language: - en metrics: - f1 base_model: - microsoft/deberta-v3-base tags: - transformers - text-classification - deberta-v3 ---
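The card consists only of metadata; a minimal, hedged inference sketch for a DeBERTa-v3 text classifier follows (the headline is a placeholder, and the meaning of the output labels is not documented in this card).

```python
# Hedged sketch: classify a ticker-related headline with this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sitenote/ticker-news-classifier",
)

print(classifier("ACME Corp shares jump 8% after strong quarterly earnings."))
```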
tscstudios/tae7fe7eqtstdxmg4wciuxyxmgv2_ffef9a94-4b72-4266-b6de-cf01058cad51
tscstudios
2025-03-20T02:32:24Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-03-20T02:32:22Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Tae7Fe7Eqtstdxmg4Wciuxyxmgv2_Ffef9A94 4B72 4266 B6De Cf01058Cad51 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('tscstudios/tae7fe7eqtstdxmg4wciuxyxmgv2_ffef9a94-4b72-4266-b6de-cf01058cad51', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
zijianh/Qwen-2.5-7B-Simple-RL-length-penalty-low-medium-high
zijianh
2025-03-20T02:31:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-19T11:46:15Z
--- library_name: transformers model_name: Qwen-2.5-7B-Simple-RL-length-penalty-low-medium-high tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-Simple-RL-length-penalty-low-medium-high This model is a fine-tuned version of a Qwen2.5-7B base model (the exact base checkpoint was not recorded by the trainer). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zijianh/Qwen-2.5-7B-Simple-RL-length-penalty-low-medium-high", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sota_mavens-university-of-michigan/huggingface/runs/u8ywq0pm) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Hunie07/gemma-3-4b-it-ko
Hunie07
2025-03-20T02:31:02Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:30:36Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Hunie07 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
redwiggler/gemma-3-4b-it-ko
redwiggler
2025-03-20T02:30:47Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-20T02:30:26Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** redwiggler - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF
netcat420
2025-03-20T02:29:08Z
0
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated", "netcat420/qwen2.5-MFANN-7b-v1.2", "llama-cpp", "gguf-my-repo", "base_model:netcat420/qwen2.5-MFANN-7b-SLERP-V1.3", "base_model:quantized:netcat420/qwen2.5-MFANN-7b-SLERP-V1.3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-20T02:04:33Z
--- base_model: netcat420/qwen2.5-MFANN-7b-SLERP-V1.3 license: apache-2.0 tags: - merge - mergekit - lazymergekit - huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated - netcat420/qwen2.5-MFANN-7b-v1.2 - llama-cpp - gguf-my-repo --- # netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF This model was converted to GGUF format from [`netcat420/qwen2.5-MFANN-7b-SLERP-V1.3`](https://huggingface.co/netcat420/qwen2.5-MFANN-7b-SLERP-V1.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/netcat420/qwen2.5-MFANN-7b-SLERP-V1.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_M-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_m.gguf -c 2048 ```
ChiHieuNguyen/result
ChiHieuNguyen
2025-03-20T02:25:28Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:Salesforce/codet5-base", "base_model:adapter:Salesforce/codet5-base", "license:apache-2.0", "region:us" ]
null
2025-03-19T03:41:29Z
--- library_name: peft license: apache-2.0 base_model: Salesforce/codet5-base tags: - generated_from_trainer model-index: - name: result results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
c00cjz00/gemma-3-12b-it-R1-medical
c00cjz00
2025-03-20T02:24:39Z
0
0
transformers
[ "transformers", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-03-19T19:56:44Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** c00cjz00 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
c00cjz00/gemma-3-4b-it-R1-medical
c00cjz00
2025-03-20T02:23:53Z
0
0
transformers
[ "transformers", "gemma3", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-03-19T17:20:38Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** c00cjz00 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mstojkov/policy-135-iter1
mstojkov
2025-03-20T02:22:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T02:22:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hykiim/results
hykiim
2025-03-20T02:19:33Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T01:50:34Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4754 - Accuracy: 0.855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5332 | 1.0 | 1250 | 0.5110 | 0.846 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.0 - Tokenizers 0.21.1
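No usage snippet is provided above; the following is a minimal sketch of manual inference for a klue/roberta-base classification fine-tune (the Korean example sentence is a placeholder, and the label mapping is not documented in this card).

```python
# Hedged sketch: tokenize one sentence and read off the predicted label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "hykiim/results"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("이 영화 정말 재미있었어요!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```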
Jamiamonique/wav2vec2-large-xls-r-300m-dm32
Jamiamonique
2025-03-20T02:14:21Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-02-04T01:29:00Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-large-xls-r-300m-dm32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-dm32 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5965 - Accuracy: 0.7292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 22 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 8.5 | 34 | 0.6786 | 0.5417 | | No log | 17.0 | 68 | 0.5965 | 0.7292 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.0.1+cu117 - Datasets 3.4.1 - Tokenizers 0.21.1
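The card does not state the task; the accuracy metric suggests audio classification, so the sketch below assumes an audio-classification head (the input file name and the 16 kHz resampling are illustrative assumptions, not documented facts).

```python
# Hedged sketch: classify one audio clip, assuming a classification head.
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

repo = "Jamiamonique/wav2vec2-large-xls-r-300m-dm32"
extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForAudioClassification.from_pretrained(repo)

speech, sr = librosa.load("sample.wav", sr=16000)  # XLS-R expects 16 kHz audio
inputs = extractor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```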
maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e1-r2_eval-n10.0
maanasharma5
2025-03-20T02:10:36Z
0
0
peft
[ "peft", "safetensors", "gpt2", "arxiv:1910.09700", "base_model:openai-community/gpt2-medium", "base_model:adapter:openai-community/gpt2-medium", "region:us" ]
null
2025-03-20T02:10:32Z
--- base_model: gpt2-medium library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
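Since the card's "How to Get Started" section is empty, here is a minimal sketch of the standard way to load a PEFT adapter on its gpt2-medium base (the prompt is a placeholder; nothing here is specific to the dialect-debiasing training).

```python
# Hedged sketch: attach the PEFT adapter to the gpt2-medium base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2-medium")
model = PeftModel.from_pretrained(
    base,
    "maanasharma5/dialect-debiasing-gpt2-medium-pnlogmse-e1-r2_eval-n10.0",
)
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")

inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```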
hirosuke/xlm-roberta-base-finetuned-panx-de
hirosuke
2025-03-20T02:10:16Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-19T13:35:02Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1376 - F1: 0.8644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2579 | 1.0 | 525 | 0.1546 | 0.8179 | | 0.1283 | 2.0 | 1050 | 0.1378 | 0.8518 | | 0.0805 | 3.0 | 1575 | 0.1376 | 0.8644 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cpu - Datasets 3.3.2 - Tokenizers 0.20.3
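The card lacks an inference example; a minimal sketch of token-classification usage for a PAN-X German NER fine-tune follows (the example sentence is an illustrative placeholder).

```python
# Hedged sketch: run German NER with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hirosuke/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```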
netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF
netcat420
2025-03-20T02:07:40Z
0
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated", "netcat420/qwen2.5-MFANN-7b-v1.2", "llama-cpp", "gguf-my-repo", "base_model:netcat420/qwen2.5-MFANN-7b-SLERP-V1.3", "base_model:quantized:netcat420/qwen2.5-MFANN-7b-SLERP-V1.3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-20T02:07:19Z
--- base_model: netcat420/qwen2.5-MFANN-7b-SLERP-V1.3 license: apache-2.0 tags: - merge - mergekit - lazymergekit - huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated - netcat420/qwen2.5-MFANN-7b-v1.2 - llama-cpp - gguf-my-repo --- # netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF This model was converted to GGUF format from [`netcat420/qwen2.5-MFANN-7b-SLERP-V1.3`](https://huggingface.co/netcat420/qwen2.5-MFANN-7b-SLERP-V1.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/netcat420/qwen2.5-MFANN-7b-SLERP-V1.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo netcat420/qwen2.5-MFANN-7b-SLERP-V1.3-Q4_K_S-GGUF --hf-file qwen2.5-mfann-7b-slerp-v1.3-q4_k_s.gguf -c 2048 ```
kevin009/llama406
kevin009
2025-03-20T02:03:15Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T01:18:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sung429/detr-accident-detection
sung429
2025-03-20T02:02:51Z
8
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2025-03-18T06:12:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
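The template above leaves usage empty; given the `detr` and `object-detection` tags, a minimal hedged sketch follows (the image path and score threshold are placeholders, and the label set is not documented in this card).

```python
# Hedged sketch: detect objects in one image with this DETR checkpoint.
from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection", model="sung429/detr-accident-detection")

image = Image.open("road_scene.jpg")
for det in detector(image):
    if det["score"] > 0.5:  # arbitrary confidence cutoff for display
        print(det["label"], round(det["score"], 3), det["box"])
```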
YugyeongJang/output4
YugyeongJang
2025-03-20T02:01:10Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-03-20T00:44:06Z
--- base_model: stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: a photo of sks vase tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - YugyeongJang/output4 This is a DreamBooth model derived from stable-diffusion-v1-5. The weights were trained on "a photo of sks vase" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use A minimal sketch, assuming the standard diffusers pipeline (the output file name is a placeholder): ```python
# Load this DreamBooth checkpoint and generate with the instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "YugyeongJang/output4", torch_dtype=torch.float16
).to("cuda")
image = pipeline("a photo of sks vase").images[0]
image.save("sks_vase.png")
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lululele/SmolLM2-FT-MyDataset
lululele
2025-03-20T02:01:06Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-26T16:04:50Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lululele/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nguyenhuyhoang0943-the-saigon-international-university/huggingface/runs/ckzjvtbb) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
wali-2121/v123
wali-2121
2025-03-20T02:00:32Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-20T02:00:32Z
--- license: apache-2.0 ---
Mrober55/Jjj
Mrober55
2025-03-20T01:58:08Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-03-20T01:56:47Z
--- license: artistic-2.0 ---
jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-0320
jerseyjerry
2025-03-20T01:57:28Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "license:other", "region:us" ]
null
2025-03-20T01:56:12Z
--- library_name: peft license: other base_model: microsoft/Phi-3-mini-4k-instruct tags: - llama-factory - lora - generated_from_trainer model-index: - name: lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the flock_task5_train dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - total_eval_batch_size: 2 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.49.0 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
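The card does not show how to run the adapter, so here is a minimal sketch of loading this LoRA on top of the base model with PEFT (assumed standard usage inferred from the tags; the prompt is a placeholder):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-0320")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

inputs = tokenizer("Summarize the benefits of LoRA fine-tuning.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```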
lesso05/0befb973-0bc9-4f06-ae5f-ab32f5900322
lesso05
2025-03-20T01:55:53Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored", "base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored", "license:llama3", "region:us" ]
null
2025-03-19T23:06:27Z
--- library_name: peft license: llama3 base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored tags: - axolotl - generated_from_trainer model-index: - name: 0befb973-0bc9-4f06-ae5f-ab32f5900322 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 531cb107a136231e_train_data.json ds_type: json format: custom path: /workspace/input_data/531cb107a136231e_train_data.json type: field_input: prompt field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso05/0befb973-0bc9-4f06-ae5f-ab32f5900322 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000205 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/531cb107a136231e_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 50 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: fef42bae-297d-41f8-aa7a-ca914a0305c4 wandb_project: 05a wandb_run: your_name wandb_runid: fef42bae-297d-41f8-aa7a-ca914a0305c4 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 0befb973-0bc9-4f06-ae5f-ab32f5900322 This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000205 - train_batch_size: 4 - eval_batch_size: 4 - seed: 50 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0007 | 1 | 0.6535 | | 0.3965 | 0.3378 | 500 | 0.3917 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
YDluffy/lottery_prediction
YDluffy
2025-03-20T01:54:34Z
0
0
xgboost
[ "xgboost", "lottery_prediction", "machine-learning", "huggingface-hub", "tabular-classification", "license:apache-2.0", "region:us" ]
tabular-classification
2025-03-18T17:12:12Z
--- library_name: xgboost tags: - xgboost - lottery_prediction - machine-learning - huggingface-hub license: apache-2.0 datasets: [] language: [] metrics: [] base_model: [] pipeline_tag: tabular-classification --- # 🎯 Mark Six Lottery Prediction Model This model is trained with **XGBoost** to predict **Mark Six lottery draw numbers**. ## 📌 Usage You can download and load the model in Python via the Hugging Face Hub API:
```python
from huggingface_hub import hf_hub_download
import xgboost as xgb

# **📥 Download the model**
repo_id = "YDluffy/lottery_prediction"
model_filename = "lottery_xgboost_model.json"
model_path = hf_hub_download(repo_id=repo_id, filename=model_filename)

# **✅ Load the XGBoost prediction model**
model = xgb.Booster()
model.load_model(model_path)
print("✅ Model loaded successfully!")
```
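A minimal sketch of running a prediction with the loaded booster. The expected feature layout is not documented in the card, so the feature vector below is purely hypothetical:
```python
import numpy as np
import xgboost as xgb

# Hypothetical feature vector; the real model's input schema is not documented.
features = np.array([[2025, 3, 20, 7, 14, 21, 33]], dtype=float)
prediction = model.predict(xgb.DMatrix(features))
print(prediction)
```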
MSHADroo/sd-qassem-unet-custom-train
MSHADroo
2025-03-20T01:54:06Z
5
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-v1-5", "text-to-image", "diffusers-training", "en", "dataset:MSHADroo/dml_task_1", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
text-to-image
2025-03-18T14:47:35Z
--- datasets: - MSHADroo/dml_task_1 language: - en base_model: - stable-diffusion-v1-5/stable-diffusion-v1-5 pipeline_tag: text-to-image library_name: diffusers tags: - stable-diffusion-v1-5 - text-to-image - diffusers - diffusers-training license: apache-2.0 --- This model is a fine-tuned UNet for the Stable Diffusion architecture, based on stable-diffusion-v1-5.
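A minimal loading sketch, assuming the repository hosts a complete Stable Diffusion pipeline (if only the fine-tuned UNet is published, load it with UNet2DConditionModel and plug it into the stable-diffusion-v1-5 base pipeline instead); the prompt is a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MSHADroo/sd-qassem-unet-custom-train", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait photo, studio lighting").images[0]  # placeholder prompt
image.save("sample.png")
```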
ikiransuryavanshi/layoutlmv3-ap7_1_ip
ikiransuryavanshi
2025-03-20T01:53:59Z
0
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-03-20T01:37:34Z
--- library_name: transformers license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-ap7_1_ip results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-ap7_1_ip This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0246 - Precision: 0.7719 - Recall: 0.8302 - F1: 0.8 - Accuracy: 0.9961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.5176 | 14.7059 | 250 | 0.0664 | 0.45 | 0.1698 | 0.2466 | 0.9881 | | 0.0288 | 29.4118 | 500 | 0.0354 | 0.6538 | 0.6415 | 0.6476 | 0.9930 | | 0.0143 | 44.1176 | 750 | 0.0269 | 0.7333 | 0.8302 | 0.7788 | 0.9956 | | 0.0098 | 58.8235 | 1000 | 0.0246 | 0.7719 | 0.8302 | 0.8 | 0.9961 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
Zorro123444/invoice_extracter_5
Zorro123444
2025-03-20T01:52:21Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-20T01:04:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mdsingh2024/ap-dnkfRpBaAiC87xjXEDoBy0
mdsingh2024
2025-03-20T01:48:54Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-19T20:42:24Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer metrics: - wer model-index: - name: ap-dnkfRpBaAiC87xjXEDoBy0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ap-dnkfRpBaAiC87xjXEDoBy0 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3976 - Model Preparation Time: 0.0221 - Wer: 0.1086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer | |:-------------:|:------:|:----:|:---------------:|:----------------------:|:------:| | 0.3416 | 0.9791 | 41 | 0.3450 | 0.0221 | 0.1210 | | 0.2234 | 1.9791 | 82 | 0.2593 | 0.0221 | 0.1044 | | 0.1546 | 2.9791 | 123 | 0.2602 | 0.0221 | 0.1020 | | 0.08 | 3.9791 | 164 | 0.2776 | 0.0221 | 0.1018 | | 0.0512 | 4.9791 | 205 | 0.3098 | 0.0221 | 0.1080 | | 0.0392 | 5.9791 | 246 | 0.3241 | 0.0221 | 0.1087 | | 0.0275 | 6.9791 | 287 | 0.3662 | 0.0221 | 0.1052 | | 0.0267 | 7.9791 | 328 | 0.3335 | 0.0221 | 0.1348 | | 0.0262 | 8.9791 | 369 | 0.3621 | 0.0221 | 0.1101 | | 0.0176 | 9.9791 | 410 | 0.3976 | 0.0221 | 0.1086 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.1
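The card lists metrics but no inference example; here is a minimal sketch using the transformers ASR pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mdsingh2024/ap-dnkfRpBaAiC87xjXEDoBy0",
)
result = asr("sample.wav")  # placeholder audio file path
print(result["text"])
```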
SachiFaker/sd-class-butterflies-32
SachiFaker
2025-03-20T01:42:39Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-03-20T01:41:56Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('SachiFaker/sd-class-butterflies-32') image = pipeline().images[0] image ```
UICHEOL-HWANG/GreenFinance-Llama-3-ko-8B
UICHEOL-HWANG
2025-03-20T01:42:04Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-19T10:10:23Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mshen2/qwen2.5-math-7b-v4-no-hcot
mshen2
2025-03-20T01:40:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T01:37:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Kalos78237/Kalos
Kalos78237
2025-03-20T01:38:14Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-03-20T01:38:13Z
--- license: bigcode-openrail-m ---
tomitaln/Qwen2.5
tomitaln
2025-03-20T01:35:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-03-14T01:05:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
whywwhy/results
whywwhy
2025-03-20T01:35:39Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T01:34:57Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4458 - Accuracy: 0.862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5332 | 1.0 | 1250 | 0.5193 | 0.839 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.0 - Tokenizers 0.21.1
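A minimal inference sketch for the fine-tuned classifier (the label names depend on the id2label mapping in the config, which the card does not document; the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="whywwhy/results")
# klue/roberta-base is a Korean model, so a Korean input is used here (placeholder).
print(classifier("이 영화 정말 재미있었어요!"))
```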
gaokerena/gaokerena
gaokerena
2025-03-20T01:35:25Z
0
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T01:23:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qpeterp/results
qpeterp
2025-03-20T01:35:00Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T01:34:11Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4681 - Accuracy: 0.853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5331 | 1.0 | 1250 | 0.5297 | 0.841 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.0 - Tokenizers 0.21.1
netcat420/qwen2.5-MFANN-7b-SLERP-V1.3
netcat420
2025-03-20T01:34:59Z
0
0
null
[ "safetensors", "qwen2", "merge", "mergekit", "lazymergekit", "huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated", "netcat420/qwen2.5-MFANN-7b-v1.2", "license:apache-2.0", "region:us" ]
null
2025-03-20T01:31:26Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated - netcat420/qwen2.5-MFANN-7b-v1.2 --- # qwen2.5-MFANN-7b-SLERP-V1.3 qwen2.5-MFANN-7b-SLERP-V1.3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated) * [netcat420/qwen2.5-MFANN-7b-v1.2](https://huggingface.co/netcat420/qwen2.5-MFANN-7b-v1.2) ## 🧩 Configuration ```yaml slices: - sources: - model: huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated layer_range: [0, 28] - model: netcat420/qwen2.5-MFANN-7b-v1.2 layer_range: [0, 28] merge_method: slerp base_model: huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 ```
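A minimal sketch of loading the merged checkpoint with transformers (assumed standard usage; the prompt is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "netcat420/qwen2.5-MFANN-7b-SLERP-V1.3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```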
geol-dgsw/results
geol-dgsw
2025-03-20T01:34:41Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T01:34:16Z
--- library_name: transformers base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4579 - Accuracy: 0.849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5391 | 1.0 | 1250 | 0.5395 | 0.835 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Tokenizers 0.21.1
DevQuasar/rinna.gemma-2-baku-2b-GGUF
DevQuasar
2025-03-20T01:31:56Z
0
0
null
[ "gguf", "text-generation", "base_model:rinna/gemma-2-baku-2b", "base_model:quantized:rinna/gemma-2-baku-2b", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T01:18:42Z
--- base_model: - rinna/gemma-2-baku-2b pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [rinna/gemma-2-baku-2b](https://huggingface.co/rinna/gemma-2-baku-2b) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
jiinking/19_bitwise_MQA_llama3B_model
jiinking
2025-03-20T01:31:42Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-20T00:19:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/purebreed-v1.2-GGUF
mradermacher
2025-03-20T01:30:03Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:trbiv/purebreed-v1.2", "base_model:quantized:trbiv/purebreed-v1.2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-20T01:09:40Z
--- base_model: trbiv/purebreed-v1.2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/trbiv/purebreed-v1.2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/purebreed-v1.2-GGUF/resolve/main/purebreed-v1.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
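A minimal sketch of fetching one of the listed quants and running it locally with llama-cpp-python (the quant choice and generation parameters are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "fast, recommended" quants from the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/purebreed-v1.2-GGUF",
    filename="purebreed-v1.2.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```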
fukugawa/gemma-2-9b-finetuned
fukugawa
2025-03-20T01:29:51Z
0
0
transformers
[ "transformers", "safetensors", "dataset:fukugawa/kamakura-tasks-100", "license:gemma", "endpoints_compatible", "region:us" ]
null
2024-12-12T02:46:38Z
---
library_name: transformers
datasets:
- fukugawa/kamakura-tasks-100
license: gemma
---

## Overview

This model was fine-tuned from [gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the 100 examples of the [kamakura-tasks-100](https://huggingface.co/datasets/fukugawa/kamakura-tasks-100) dataset so that it can follow instructions.

## Demo

A chatbot demo built on this model is published on Spaces.

* [Chatbot demo](https://huggingface.co/spaces/fukugawa/gemma-2-9b-finetuned)

## Blog Post

* [Fine-tuning Gemma2-9B on a self-made dataset](https://matsuolab-geniac.notion.site/Gemma2-9B-fukugawa-d2c52f881d324c6fbc37febe3d30d0c0) (in Japanese)

## Usage

The following is inference code that generates answers for the ELYZA-tasks-100-TV benchmark (100 questions).

#### Requirements:

```bash
# python 3.10
pip install -U transformers
pip install -U accelerate
pip install -U peft
```

To use [gemma-2-9b](https://huggingface.co/google/gemma-2-9b), you must log in to Hugging Face and accept the model's terms of use. Log in with the command below (in a notebook, you can instead pass a token via the `token` argument of `from_pretrained()`).

```bash
huggingface-cli login
```

#### Inference:

~~~~python
import json
import torch
from datasets import Dataset
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "fukugawa/gemma-2-9b-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)

# Each line of the JSONL file holds one task with "task_id" and "input" fields.
datasets = Dataset.from_json("./elyza-tasks-100-TV_0.jsonl")

results = []
for data in tqdm(datasets):
    input = data["input"]
    # Prompt template: "### 指示" = instruction, "### 回答" = answer.
    prompt = f"### 指示\n{input}\n### 回答\n"

    tokenized_input = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            max_new_tokens=512,
            do_sample=False,
        )[0]
    # Decode only the newly generated tokens, not the prompt.
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

    results.append({"task_id": data["task_id"], "input": input, "output": output})

with open("./outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
~~~~

The ELYZA-tasks-100-TV JSONL file (`elyza-tasks-100-TV_0.jsonl`) must be present. Inference requires about 18-19 GB of GPU memory; operation has been verified on an Nvidia L4 (24 GB). Running all 100 questions takes roughly 15-20 minutes, and the results are written to `outputs.jsonl` in the current directory.

## Dataset

* [kamakura-tasks-100](https://huggingface.co/datasets/fukugawa/kamakura-tasks-100)
stacklok/Qwen2.5-Coder-7B-Instruct-reactjs-chat
stacklok
2025-03-20T01:29:35Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "qwen2", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-03-20T01:23:02Z
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** stacklok
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
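As a usage sketch (not part of the original card), the merged checkpoint can presumably be loaded through the standard `transformers` chat interface; the prompt below is purely illustrative:

```python
# Minimal sketch: chat-style generation with this ReactJS-tuned coder model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stacklok/Qwen2.5-Coder-7B-Instruct-reactjs-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a minimal React counter component."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```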
yoonssun07/results
yoonssun07
2025-03-20T01:29:26Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-20T01:28:11Z
---
library_name: transformers
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4393
- Accuracy: 0.864

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5321 | 1.0 | 1250 | 0.5129 | 0.846 |

### Framework versions

- Transformers 4.48.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
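Since the card does not yet include a usage snippet, here is a minimal sketch of running the classifier through the `transformers` pipeline; the Korean example sentence and the generic `LABEL_*` names are assumptions, as the fine-tuning dataset is unknown:

```python
# Minimal sketch: sentence classification with this checkpoint.
# Label names depend on the (unknown) fine-tuning dataset and may be generic LABEL_0/LABEL_1.
from transformers import pipeline

clf = pipeline("text-classification", model="yoonssun07/results")
print(clf("이 영화 정말 재미있었어요!"))  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```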
lesso07/a11480e4-b97f-4323-9e65-11f58ad10a2d
lesso07
2025-03-20T01:28:02Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "license:mit", "region:us" ]
null
2025-03-19T23:21:04Z
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a11480e4-b97f-4323-9e65-11f58ad10a2d
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d848c58d64aeb958_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d848c58d64aeb958_train_data.json
  type:
    field_input: documents
    field_instruction: question
    field_output: answer
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso07/a11480e4-b97f-4323-9e65-11f58ad10a2d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000207
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/d848c58d64aeb958_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 70
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4a21d2e2-6d81-4cab-b09f-1870a1ec35b4
wandb_project: 07a
wandb_run: your_name
wandb_runid: 4a21d2e2-6d81-4cab-b09f-1870a1ec35b4
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# a11480e4-b97f-4323-9e65-11f58ad10a2d

This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9824

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 70
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0009 | 1    | 1.5209          |
| 7.87          | 0.4270 | 500  | 0.9824          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
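This repository ships only the LoRA adapter, so inference requires attaching it to the base model; here is a minimal sketch using PEFT (the dtype and device settings are assumptions, and `trust_remote_code=True` mirrors the axolotl config above):

```python
# Minimal sketch: attach this LoRA adapter to its Phi-3.5 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "lesso07/a11480e4-b97f-4323-9e65-11f58ad10a2d"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumed; the training config uses bf16: auto
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)
# Optionally fold the adapter weights into the base model for faster inference:
# model = model.merge_and_unload()
```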