Column schema of the dump (name, dtype, observed min/max):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 06:27:53 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (519 classes) | n/a | n/a |
| tags | list | 1 item | 4.05k items |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 06:27:45 |
| card | string | 11 chars | 1.01M chars |
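Each record below follows this schema, one field per column. A minimal loading sketch with the `datasets` library; the repo id is hypothetical, since the dump does not name its source dataset:

```python
from itertools import islice
from datasets import load_dataset

# Hypothetical repo id: the dump above does not name its source dataset.
ds = load_dataset("some-org/hub-model-metadata", split="train", streaming=True)

for record in islice(ds, 2):
    # Each record mirrors the schema: modelId, author, last_modified, downloads,
    # likes, library_name, tags, pipeline_tag, createdAt, card.
    print(record["modelId"], record["pipeline_tag"], len(record["card"]))
```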
aajonaed/ai
aajonaed
2025-04-23T05:03:55Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-04-23T05:03:55Z
--- license: bigscience-openrail-m ---
nldoz/gemma3-27b
nldoz
2025-04-23T05:03:48Z
0
0
null
[ "gguf", "base_model:google/gemma-3-27b-it-qat-q4_0-gguf", "base_model:quantized:google/gemma-3-27b-it-qat-q4_0-gguf", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T05:03:02Z
--- license: gemma metrics: - perplexity base_model: - google/gemma-3-27b-it-qat-q4_0-gguf --- Backup of https://huggingface.co/stduhpf/google-gemma-3-27b-it-qat-q4_0-gguf-small. Quants made by stduhpf. Fantastic performance!
MatrixYao/swin-tiny-patch4-window7-224-finetuned-eurosat
MatrixYao
2025-04-23T04:55:16Z
0
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-04-23T04:50:01Z
--- library_name: transformers license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0492 - Accuracy: 0.9830 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2157 | 1.0 | 190 | 0.0875 | 0.9704 | | 0.1528 | 2.0 | 380 | 0.0713 | 0.9748 | | 0.0861 | 3.0 | 570 | 0.0492 | 0.9830 | ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+xpu - Datasets 3.5.0 - Tokenizers 0.21.1
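The card above stops at framework versions and omits an inference snippet; a minimal sketch for this image-classification checkpoint (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MatrixYao/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Placeholder path: any RGB image of a land-cover scene.
print(classifier("example_satellite_tile.jpg", top_k=3))
```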
jonathanyin/qwen2.5-7b_grok-3-mini-high_traces-20250423_044329
jonathanyin
2025-04-23T04:53:39Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T04:44:11Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: qwen2.5-7b_grok-3-mini-high_traces-20250423_044329 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-7b_grok-3-mini-high_traces-20250423_044329 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jonathanyin/qwen2.5-7b_grok-3-mini-high_traces-20250423_044329", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jonathanyin-yale/LLM%20Reasoning/runs/2ijjl4f9) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF
mradermacher
2025-04-23T04:52:36Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:LLM-EDA/pyra_tb", "base_model:LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B", "base_model:quantized:LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T04:33:37Z
--- base_model: LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B datasets: - LLM-EDA/pyra_tb language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LLM-EDA/VeriPrefer-Qwen2.5-Coder-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF/resolve/main/VeriPrefer-Qwen2.5-Coder-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
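To fetch one of the quants listed in the card above, a minimal sketch using `huggingface_hub`; the filename is the Q4_K_M entry the table marks "fast, recommended":

```python
from huggingface_hub import hf_hub_download

# Filename taken from the quant table in the card above.
path = hf_hub_download(
    repo_id="mradermacher/VeriPrefer-Qwen2.5-Coder-7B-GGUF",
    filename="VeriPrefer-Qwen2.5-Coder-7B.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to hand to a GGUF runtime such as llama.cpp
```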
MinaMila/gemma2_2b_unlearned_LoRa_Adult_ep6_22
MinaMila
2025-04-23T04:48:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:48:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hafidev/bert-base-uncased-coordinating-conjunctions-disfluency-detection-beta-v1
hafidev
2025-04-23T04:46:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-23T04:45:05Z
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: bert-base-uncased-coordinating-conjunctions-disfluency-detection-beta-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-coordinating-conjunctions-disfluency-detection-beta-v1 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0146 - Model Preparation Time: 0.0032 - Accuracy: 0.9961 - Precision: 0.9271 - Recall: 0.9563 - F1: 0.9415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:|:---------:|:------:|:------:| | 0.1001 | 1.0 | 209 | 0.0146 | 0.0032 | 0.9951 | 0.8980 | 0.9612 | 0.9285 | | 0.0136 | 2.0 | 418 | 0.0137 | 0.0032 | 0.9958 | 0.9145 | 0.9612 | 0.9373 | | 0.0099 | 3.0 | 627 | 0.0133 | 0.0032 | 0.9959 | 0.9267 | 0.9515 | 0.9389 | | 0.0074 | 4.0 | 836 | 0.0142 | 0.0032 | 0.9958 | 0.9306 | 0.9442 | 0.9373 | | 0.006 | 5.0 | 1045 | 0.0146 | 0.0032 | 0.9961 | 0.9271 | 0.9563 | 0.9415 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
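The card gives no usage snippet; a minimal token-classification sketch, assuming the checkpoint exposes standard token-classification labels (the example sentence is invented):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="hafidev/bert-base-uncased-coordinating-conjunctions-disfluency-detection-beta-v1",
    aggregation_strategy="simple",  # merge word pieces into labeled spans
)

# Invented example with a repeated coordinating conjunction ("and and").
print(tagger("I wanted tea and and also some coffee"))
```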
prithivMLmods/Multilabel-GeoSceneNet
prithivMLmods
2025-04-23T04:44:41Z
0
0
transformers
[ "transformers", "safetensors", "siglip", "image-classification", "Structures", "Desert", "Glacier", "Street", "Ocean", "Image-Classifier", "art", "Mountain", "en", "dataset:prithivMLmods/Multilabel-GeoSceneNet-16K", "base_model:google/siglip2-base-patch16-224", "base_model:finetune:google/siglip2-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-04-22T19:07:41Z
--- license: apache-2.0 datasets: - prithivMLmods/Multilabel-GeoSceneNet-16K library_name: transformers language: - en base_model: - google/siglip2-base-patch16-224 pipeline_tag: image-classification tags: - Structures - Desert - Glacier - Street - Ocean - Image-Classifier - art - Mountain --- ![DCV.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/b3meMjfW6qOwWkuE-UCKQ.png) # **Multilabel-GeoSceneNet** > **Multilabel-GeoSceneNet** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-label** image classification. It is designed to recognize and label multiple geographic or environmental elements in a single image using the **SiglipForImageClassification** architecture. ```py Classification Report: precision recall f1-score support Buildings and Structures 0.8881 0.9498 0.9179 2190 Desert 0.9649 0.9480 0.9564 2000 Forest Area 0.9807 0.9855 0.9831 2271 Hill or Mountain 0.8616 0.8993 0.8800 2512 Ice Glacier 0.9114 0.8382 0.8732 2404 Sea or Ocean 0.9328 0.9525 0.9426 2274 Street View 0.9476 0.9106 0.9287 2382 accuracy 0.9245 16033 macro avg 0.9267 0.9263 0.9260 16033 weighted avg 0.9253 0.9245 0.9244 16033 ``` ![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ld-vFb2MWg43wAG5pyFZb.png) --- The model predicts the presence of one or more of the following **7 geographic scene categories**: ``` Class 0: "Buildings and Structures" Class 1: "Desert" Class 2: "Forest Area" Class 3: "Hill or Mountain" Class 4: "Ice Glacier" Class 5: "Sea or Ocean" Class 6: "Street View" ``` --- ## **Install dependencies** ```python !pip install -q transformers torch pillow gradio ``` --- ## **Inference Code** ```python import gradio as gr from transformers import AutoImageProcessor, SiglipForImageClassification from PIL import Image import torch # Load model and processor model_name = "prithivMLmods/Multilabel-GeoSceneNet" # Updated model name model = SiglipForImageClassification.from_pretrained(model_name) processor = AutoImageProcessor.from_pretrained(model_name) def classify_geoscene_image(image): """Predicts geographic scene labels for an input image.""" image = Image.fromarray(image).convert("RGB") inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits probs = torch.sigmoid(logits).squeeze().tolist() # Sigmoid for multilabel labels = { "0": "Buildings and Structures", "1": "Desert", "2": "Forest Area", "3": "Hill or Mountain", "4": "Ice Glacier", "5": "Sea or Ocean", "6": "Street View" } threshold = 0.5 predictions = { labels[str(i)]: round(probs[i], 3) for i in range(len(probs)) if probs[i] >= threshold } return predictions or {"None Detected": 0.0} # Create Gradio interface iface = gr.Interface( fn=classify_geoscene_image, inputs=gr.Image(type="numpy"), outputs=gr.Label(label="Predicted Scene Categories"), title="Multilabel-GeoSceneNet", description="Upload an image to detect multiple geographic scene elements (e.g., forest, ocean, buildings)." ) if __name__ == "__main__": iface.launch() ``` --- ## **Intended Use:** The **Multilabel-GeoSceneNet** model is suitable for recognizing multiple geographic and structural elements in a single image. Use cases include: - **Remote Sensing:** Label elements in satellite or drone imagery. - **Geographic Tagging:** Auto-tagging images for search or sorting. - **Environmental Monitoring:** Identify features like glaciers or forests. 
- **Scene Understanding:** Help autonomous systems interpret complex scenes.
KaushikSahoo/q-taxi
KaushikSahoo
2025-04-23T04:43:31Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-23T04:43:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.40 +/- 2.68 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="KaushikSahoo/q-taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
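The snippet above calls `load_from_hub` without defining it; a minimal sketch of such a helper, assuming the pickled model-dict layout used in the Hugging Face Deep RL course (hypothetical, not part of this repo):

```python
import pickle

import gymnasium as gym  # the card's snippet refers to this as `gym`
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled model dict (Q-table, env_id, hyperparameters) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="KaushikSahoo/q-taxi", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```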
shashankheg/my_qa_model
shashankheg
2025-04-23T04:41:13Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-04-23T00:06:10Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: my_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_qa_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4106 | | 2.7898 | 2.0 | 500 | 1.7715 | | 2.7898 | 3.0 | 750 | 1.6448 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cpu - Datasets 3.3.2 - Tokenizers 0.21.0
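As with the other auto-generated cards, no inference example is included; a minimal question-answering sketch (question and context are invented):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="shashankheg/my_qa_model")

# Invented example; any (question, context) pair works.
result = qa(
    question="What does DistilBERT distill?",
    context="DistilBERT is a smaller language model distilled from BERT.",
)
print(result["answer"], result["score"])
```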
hirubyyyy/model_small
hirubyyyy
2025-04-23T04:39:12Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:36:24Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hartunka/distilbert_rand_100_v2_mnli
Hartunka
2025-04-23T04:39:10Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_100_v2", "base_model:finetune:Hartunka/distilbert_rand_100_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-23T03:44:33Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_100_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_rand_100_v2_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.6670056956875509 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_100_v2_mnli This model is a fine-tuned version of [Hartunka/distilbert_rand_100_v2](https://huggingface.co/Hartunka/distilbert_rand_100_v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.7785 - Accuracy: 0.6670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9746 | 1.0 | 1534 | 0.8963 | 0.5843 | | 0.8662 | 2.0 | 3068 | 0.8369 | 0.6220 | | 0.7741 | 3.0 | 4602 | 0.7931 | 0.6513 | | 0.6996 | 4.0 | 6136 | 0.7794 | 0.6646 | | 0.6337 | 5.0 | 7670 | 0.7858 | 0.6726 | | 0.5673 | 6.0 | 9204 | 0.8410 | 0.6655 | | 0.5012 | 7.0 | 10738 | 0.8852 | 0.6696 | | 0.439 | 8.0 | 12272 | 0.9909 | 0.6619 | | 0.3799 | 9.0 | 13806 | 1.1054 | 0.6647 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
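For an MNLI checkpoint the classifier scores a premise/hypothesis pair; a minimal sketch, assuming the text-classification pipeline's dict input form (the sentences are invented):

```python
from transformers import pipeline

nli = pipeline("text-classification", model="Hartunka/distilbert_rand_100_v2_mnli")

# MNLI takes a (premise, hypothesis) pair; labels cover entailment/neutral/contradiction.
print(nli({"text": "A man is playing a guitar on stage.",
           "text_pair": "A person is performing music."}))
```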
DamienZ/fortunetelling
DamienZ
2025-04-23T04:37:19Z
0
0
null
[ "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T03:00:22Z
--- license: apache-2.0 ---
hZzy/mistral-7b-expo-7b-DPO-25-last-2
hZzy
2025-04-23T04:36:21Z
0
0
peft
[ "peft", "safetensors", "mistral", "alignment-handbook", "ndcg", "trl", "expo", "generated_from_trainer", "dataset:hZzy/direction_right2", "base_model:hZzy/mistral-7b-sft-25-1", "base_model:adapter:hZzy/mistral-7b-sft-25-1", "license:apache-2.0", "region:us" ]
null
2025-04-22T21:47:10Z
--- base_model: hZzy/mistral-7b-sft-25-1 datasets: - hZzy/direction_right2 library_name: peft license: apache-2.0 tags: - alignment-handbook - ndcg - trl - expo - generated_from_trainer model-index: - name: mistral-7b-expo-7b-DPO-25-last-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/3me07br1) # mistral-7b-expo-7b-DPO-25-last-2 This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1) on the hZzy/direction_right2 dataset. It achieves the following results on the evaluation set: - Loss: 0.6090 - Objective: 0.6216 - Reward Accuracy: 0.6622 - Logp Accuracy: 0.6317 - Log Diff Policy: 9.5533 - Chosen Logps: -124.4338 - Rejected Logps: -133.9871 - Chosen Rewards: -1.4876 - Rejected Rewards: -1.9461 - Logits: -2.1433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 12 - total_train_batch_size: 108 - total_eval_batch_size: 9 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Objective | Reward Accuracy | Logp Accuracy | Log Diff Policy | Chosen Logps | Rejected Logps | Chosen Rewards | Rejected Rewards | Logits | |:-------------:|:------:|:----:|:---------------:|:---------:|:---------------:|:-------------:|:---------------:|:------------:|:--------------:|:--------------:|:----------------:|:-------:| | 0.6912 | 0.0758 | 50 | 0.6916 | 0.6916 | 0.5453 | 0.5165 | 0.4536 | -93.0364 | -93.4900 | 0.0823 | 0.0787 | -2.1999 | | 0.6816 | 0.1517 | 100 | 0.6839 | 0.6835 | 0.5682 | 0.5268 | 0.8519 | -92.7664 | -93.6183 | 0.0958 | 0.0723 | -2.2118 | | 0.6682 | 0.2275 | 150 | 0.6681 | 0.6699 | 0.5898 | 0.5391 | 1.8524 | -90.8783 | -92.7307 | 0.1902 | 0.1167 | -2.1819 | | 0.6368 | 0.3033 | 200 | 0.6511 | 0.6563 | 0.6099 | 0.5663 | 3.4226 | -95.6494 | -99.0720 | -0.0484 | -0.2004 | -2.1272 | | 0.6147 | 0.3792 | 250 | 0.6438 | 0.6524 | 0.6230 | 0.5800 | 4.8890 | -106.6454 | -111.5345 | -0.5982 | -0.8235 | -2.1329 | | 0.6198 | 0.4550 | 300 | 0.6340 | 0.6433 | 0.6323 | 0.5979 | 5.6663 | -101.5432 | -107.2095 | -0.3430 | -0.6073 | -2.1344 | | 0.5733 | 0.5308 | 350 | 0.6317 | 0.6468 | 0.6401 | 0.6119 | 7.1300 | -111.3464 | -118.4764 | -0.8332 | -1.1706 | -2.1173 | | 0.5624 | 0.6067 | 400 | 0.6249 | 0.6382 | 0.6477 | 0.6099 | 7.5147 | -99.6262 | -107.1409 | -0.2472 | -0.6038 | -2.1473 | | 0.5804 | 0.6825 | 450 | 0.6201 | 0.6366 | 0.6490 | 0.6270 | 8.6628 | -115.1139 | -123.7767 | -1.0216 | -1.4356 | -2.1786 | | 0.5731 | 0.7583 | 500 | 0.6207 | 0.6348 | 0.6544 | 0.6326 | 9.4271 | -112.7352 | -122.1623 | -0.9026 | -1.3549 | -2.1662 | | 0.5566 | 0.8342 | 550 | 0.6190 | 0.6309 | 0.6549 | 0.6342 | 9.5743 | -120.8242 | -130.3985 | -1.3071 | -1.7667 | -2.1212 | | 
0.5574 | 0.9100 | 600 | 0.6111 | 0.6232 | 0.6647 | 0.6367 | 9.6758 | -128.2970 | -137.9727 | -1.6807 | -2.1454 | -2.1279 | | 0.5855 | 0.9858 | 650 | 0.6065 | 0.6193 | 0.6672 | 0.6342 | 9.7652 | -122.9088 | -132.6740 | -1.4113 | -1.8805 | -2.1149 | ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.19.1
zhouxiangxin/7e3d0b5c758790597887d7f728abdd187aa29996f40db7a3a82e5d0379a08ce5
zhouxiangxin
2025-04-23T04:34:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T04:21:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nithin0029/q-FrozenLake-v1-4x4-noSlippery
nithin0029
2025-04-23T04:33:16Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-23T04:33:13Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nithin0029/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
kostiantynk-outlook/47179764-c630-41a8-b5f2-3232d813b7eb
kostiantynk-outlook
2025-04-23T04:32:00Z
0
0
transformers
[ "transformers", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:31:29Z
--- library_name: transformers model_name: kostiantynk-outlook/47179764-c630-41a8-b5f2-3232d813b7eb tags: - generated_from_trainer licence: license --- # Model Card for kostiantynk-outlook/47179764-c630-41a8-b5f2-3232d813b7eb This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="None", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jonathanyin/qwen2.5-7b_grok-3-mini-high_traces-20250423_041357
jonathanyin
2025-04-23T04:28:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T04:15:01Z
--- base_model: Qwen/Qwen2.5-7B-Instruct library_name: transformers model_name: qwen2.5-7b_grok-3-mini-high_traces-20250423_041357 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qwen2.5-7b_grok-3-mini-high_traces-20250423_041357 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jonathanyin/qwen2.5-7b_grok-3-mini-high_traces-20250423_041357", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jacobcd52/gemma-2-9b-it_old_cars_142
jacobcd52
2025-04-23T04:28:00Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:unsloth/gemma-2-9b-it", "base_model:finetune:unsloth/gemma-2-9b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T15:58:15Z
--- base_model: unsloth/gemma-2-9b-it tags: - text-generation-inference - transformers - unsloth - gemma2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jacobcd52 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2-9b-it This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
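The card documents only how the model was trained; a minimal loading sketch, assuming merged full weights were pushed (the tags list `safetensors` and `gemma2`) and that `accelerate` is installed for `device_map`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jacobcd52/gemma-2-9b-it_old_cars_142"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # requires accelerate

# Invented prompt; adjust generation settings to taste.
inputs = tokenizer("Tell me about old cars.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```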
TOMFORD79/Hzen_24
TOMFORD79
2025-04-23T04:26:50Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:57:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IsshikiHugh/HSMR-data_inputs
IsshikiHugh
2025-04-23T04:26:24Z
0
1
null
[ "3DV", "HMR", "arxiv:2503.21751", "license:mit", "region:us" ]
null
2025-02-12T07:00:58Z
--- license: mit tags: - 3DV - HMR --- # Reconstructing Humans with a Biomechanically Accurate Skeleton Official Huggingface dependency files of [HSMR](https://arxiv.org/abs/2503.21751). Please refer to the [GitHub Repo](https://github.com/IsshikiHugh/HSMR) for the code release and more details. This Huggingface space is only for demo purposes.
TOMFORD79/Hzen_23
TOMFORD79
2025-04-23T04:26:23Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:57:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sindhusatish97/SFT
sindhusatish97
2025-04-23T04:22:58Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:22:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sindhusatish97/cs297
sindhusatish97
2025-04-23T04:22:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:22:39Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B library_name: transformers model_name: cs297 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for cs297 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sindhusatish97/cs297", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
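The card above notes only that training used SFT via TRL, without showing the training side. A minimal sketch of what such a run can look like with TRL's `SFTTrainer`, assuming a generic instruction dataset; the placeholder dataset and hyperparameters below are illustrative, not the card's actual setup:

```python
# Illustrative TRL SFT sketch; the real training data and hyperparameters
# for cs297 are not documented, so the dataset below is a placeholder.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder

trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",  # base model named in the card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="cs297"),
)
trainer.train()
```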
TOMFORD79/Hzen_19
TOMFORD79
2025-04-23T04:21:45Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:57:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
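This card's "How to Get Started" section is still a placeholder. Going only by the row's metadata (a `llama` checkpoint tagged `text-generation`), a standard transformers load would look like the sketch below; whether the checkpoint actually behaves as a general text generator is unverified:

```python
# Generic loading sketch inferred from the tags (llama, text-generation);
# the card itself gives no usage instructions.
from transformers import pipeline

generator = pipeline("text-generation", model="TOMFORD79/Hzen_19")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```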
mradermacher/llama-33b-hf-GGUF
mradermacher
2025-04-23T04:21:32Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:pinkmanlove/llama-33b-hf", "base_model:quantized:pinkmanlove/llama-33b-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T18:48:36Z
--- base_model: pinkmanlove/llama-33b-hf language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/pinkmanlove/llama-33b-hf <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-33b-hf-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q2_K.gguf) | Q2_K | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q3_K_S.gguf) | Q3_K_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q3_K_M.gguf) | Q3_K_M | 15.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q3_K_L.gguf) | Q3_K_L | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.IQ4_XS.gguf) | IQ4_XS | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q4_K_S.gguf) | Q4_K_S | 18.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q4_K_M.gguf) | Q4_K_M | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q5_K_S.gguf) | Q5_K_S | 22.5 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q5_K_M.gguf) | Q5_K_M | 23.1 | | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q6_K.gguf) | Q6_K | 26.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-33b-hf-GGUF/resolve/main/llama-33b-hf.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
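The quant table above lists files but no runnable example. One common way to fetch and run a single-file quant is `huggingface_hub` plus `llama-cpp-python`; this is just one GGUF-capable runtime among several, not something the card prescribes:

```python
# Fetch the "fast, recommended" Q4_K_M quant from the table and run it with
# llama-cpp-python (one of several GGUF runtimes; not mandated by the card).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/llama-33b-hf-GGUF",
    filename="llama-33b-hf.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```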
TOMFORD79/Hzen_18
TOMFORD79
2025-04-23T04:21:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:57:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
matrixportal/Yukselis-Lora
matrixportal
2025-04-23T04:19:46Z
0
0
peft
[ "peft", "llama", "generated_from_trainer", "base_model:matrixportal/Metafor", "base_model:adapter:matrixportal/Metafor", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-23T04:18:41Z
--- library_name: peft license: apache-2.0 base_model: matrixportal/Metafor tags: - generated_from_trainer model-index: - name: lora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.5.0` ```yaml adapter: qlora base_model: matrixportal/Metafor bf16: true dataset_prepared_path: last_run_prepared datasets: - path: matrixportal/aya_dataset_alpaca type: alpaca debug: null deepspeed: null early_stopping_patience: null eval_sample_packing: true eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true group_by_length: false learning_rate: 2e-5 load_in_4bit: true load_in_8bit: false logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_modules_to_save: - embed_tokens - lm_head lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 1 num_epochs: 1 optimizer: paged_adamw_8bit output_dir: lora-out pad_to_sequence_len: true resume_from_checkpoint: null sample_packing: true saves_per_epoch: 1 sdp_attention: true sequence_len: 2048 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false train_on_inputs: false val_set_size: 0.05 wandb_entity: null wandb_log_model: null wandb_name: null wandb_project: null wandb_watch: null warmup_steps: 1 weight_decay: 0.0 xformers_attention: null ``` </details><br> # lora-out This model is a fine-tuned version of [matrixportal/Metafor](https://huggingface.co/matrixportal/Metafor) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.0384 | 0.0015 | 25 | 3.4000 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.3.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.3
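Since this repo holds a QLoRA adapter (`lora-out`) rather than full weights, loading it typically goes through PEFT. A sketch, assuming the repo contains a standard PEFT adapter as the Axolotl config suggests:

```python
# Sketch: load the adapter on top of its base model with PEFT. Assumes the
# repo hosts a standard PEFT/QLoRA adapter, as the Axolotl config implies.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("matrixportal/Yukselis-Lora")
tokenizer = AutoTokenizer.from_pretrained("matrixportal/Metafor")  # base model from the card
```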
thviet79/Vistral_med_model
thviet79
2025-04-23T04:18:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:16:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
m-aliabbas1/opu
m-aliabbas1
2025-04-23T04:10:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T04:04:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
msp5382/gemma-3-12b-capstone
msp5382
2025-04-23T04:05:20Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-23T04:05:00Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** msp5382 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
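The card states the model was trained with Unsloth from a 4-bit base but includes no loading code. A hedged sketch with Unsloth's loader; the sequence length is an illustrative choice, and Gemma 3 support depends on your Unsloth version:

```python
# Sketch: load with Unsloth, mirroring the 4-bit base this card names.
# max_seq_length is illustrative; the card does not specify one.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="msp5382/gemma-3-12b-capstone",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's inference mode
```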
agurung/Qwen2.5-3B-DumbSFTCompletion
agurung
2025-04-23T04:02:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:none", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T13:06:23Z
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: none library_name: transformers model_name: Qwen2.5-3B-DumbSFTCompletion tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen2.5-3B-DumbSFTCompletion This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [none](https://huggingface.co/datasets/none) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="agurung/Qwen2.5-3B-DumbSFTCompletion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alexgurung/babilong-sft/runs/o376jyqo) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Milian/TempSFTPrivate
Milian
2025-04-23T03:58:24Z
0
0
peft
[ "peft", "safetensors", "qwen2", "arxiv:1910.09700", "base_model:Milian/TempSFTPrivate", "base_model:adapter:Milian/TempSFTPrivate", "region:us" ]
null
2025-04-23T03:52:36Z
--- base_model: Milian/TempSFTPrivate library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
lushwhale/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_durable_grouse
lushwhale
2025-04-23T03:57:25Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am grassy durable grouse", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T09:21:21Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_durable_grouse tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am grassy durable grouse - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_durable_grouse This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lushwhale/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_durable_grouse", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
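The card says training used GRPO via TRL but shows only inference. A minimal GRPO training sketch in the TRL style; the reward function and dataset below are hypothetical stand-ins, since the actual swarm setup is not documented:

```python
# Illustrative GRPO sketch; the reward function and dataset are placeholders,
# not the card's actual Gensyn swarm configuration.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",  # base model named in the card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```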
TOMFORD79/Run15
TOMFORD79
2025-04-23T03:56:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:41:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TOMFORD79/Run12
TOMFORD79
2025-04-23T03:55:45Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:41:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TOMFORD79/Run10
TOMFORD79
2025-04-23T03:54:43Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T03:41:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hanyaHanan/gemma-3-AccelistFT
hanyaHanan
2025-04-23T03:54:35Z
2
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-14T02:40:44Z
--- base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hanyaHanan - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
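The card does not include usage instructions; a minimal inference sketch with the standard `transformers` pipeline (the prompt and generation settings here are illustrative assumptions, not documented defaults):

```python
from transformers import pipeline

# Load the fine-tuned Gemma 3 model for chat-style text generation
pipe = pipeline("text-generation", model="hanyaHanan/gemma-3-AccelistFT")

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"])
```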
eming/SDSC6001_proj
eming
2025-04-23T00:37:40Z
0
0
null
[ "region:us" ]
null
2025-04-09T13:34:28Z
# SDSC6001 Project Implementation Code (Group 31)

Implement a hybrid recommendation system that combines traditional item-based collaborative filtering on ratings with image and description similarity. The main steps are as follows:

1. **Load and preprocess data**
   The dataset contains users (`user_id`), items (`asin`), and ratings (`rating`). We construct a user-item rating pivot table and compute item-to-item collaborative filtering similarity (e.g., cosine similarity).

2. **Multimodal similarity search with Milvus**
   Assume item metadata includes image links (`imageURLHighRes`) and textual descriptions (`description`). We use the Milvus vector database to provide two types of similarity queries for each `asin`: image similarity and description similarity. These queries typically require:
   - Extracting image and description feature vectors for the target `asin`
   - Querying Milvus for the most similar items to the target vector

   (The code below uses pseudocode for Milvus queries; in practice, implement with the Milvus Python SDK.)

3. **Hybrid similarity: constructing a fusion function**
   For any two items, we have:
   - Rating-based similarity (from collaborative filtering)
   - Image similarity
   - Description similarity

   Define a fusion function, e.g., a weighted sum:

   ```
   hybrid_score = w_rating * rating_sim + w_image * image_sim + w_desc * desc_sim
   ```

   The weights can be tuned on a validation set, e.g., `w_rating=0.6`, `w_image=0.2`, `w_desc=0.2`.

4. **Generate recommendations for users**
   For a given user, first find all items they have rated. For each candidate item not yet rated, compute an aggregate hybrid similarity score against all items the user has rated, then rank candidates by score to produce the recommendation list.
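As a concrete illustration of steps 3–4, here is a minimal sketch of the fusion and ranking logic. The precomputed similarity dictionaries stand in for the CF similarity matrix and the Milvus query results, and the weights are the example values above:

```python
from typing import Dict, List, Tuple

Pair = Tuple[str, str]

def hybrid_score(a: str, b: str,
                 rating_sim: Dict[Pair, float],
                 image_sim: Dict[Pair, float],
                 desc_sim: Dict[Pair, float],
                 w=(0.6, 0.2, 0.2)) -> float:
    """Weighted fusion of rating-, image-, and description-based similarity."""
    key = (a, b)
    return (w[0] * rating_sim.get(key, 0.0)    # item-based CF (e.g. cosine)
            + w[1] * image_sim.get(key, 0.0)   # from a Milvus image-vector query
            + w[2] * desc_sim.get(key, 0.0))   # from a Milvus description query

def recommend(user_rated: List[str], candidates: List[str],
              sims: Tuple[dict, dict, dict], k: int = 10) -> List[str]:
    """Rank unrated candidate items by aggregate hybrid similarity to rated items."""
    scores = {c: sum(hybrid_score(c, r, *sims) for r in user_rated)
              for c in candidates if c not in user_rated}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```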
Hartunka/distilbert_rand_50_v2_cola
Hartunka
2025-04-23T00:37:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_50_v2", "base_model:finetune:Hartunka/distilbert_rand_50_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-23T00:36:09Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_50_v2 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: distilbert_rand_50_v2_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_50_v2_cola This model is a fine-tuned version of [Hartunka/distilbert_rand_50_v2](https://huggingface.co/Hartunka/distilbert_rand_50_v2) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6141 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.6138 | 1.0 | 34 | 0.6141 | 0.0 | 0.6913 | | 0.5901 | 2.0 | 68 | 0.6175 | 0.0257 | 0.6913 | | 0.5446 | 3.0 | 102 | 0.6304 | 0.0832 | 0.6826 | | 0.4862 | 4.0 | 136 | 0.7370 | 0.0899 | 0.6366 | | 0.4254 | 5.0 | 170 | 0.7218 | 0.1086 | 0.6635 | | 0.372 | 6.0 | 204 | 0.8227 | 0.0725 | 0.6079 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
Hartunka/distilbert_rand_20_v2_mnli
Hartunka
2025-04-23T00:35:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T23:41:15Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_rand_20_v2_mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.6516476810414972 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_mnli This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.8066 - Accuracy: 0.6516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9775 | 1.0 | 1534 | 0.9092 | 0.5704 | | 0.8776 | 2.0 | 3068 | 0.8706 | 0.6022 | | 0.7966 | 3.0 | 4602 | 0.8162 | 0.6407 | | 0.7182 | 4.0 | 6136 | 0.8156 | 0.6444 | | 0.6474 | 5.0 | 7670 | 0.8175 | 0.6536 | | 0.5759 | 6.0 | 9204 | 0.8543 | 0.6559 | | 0.5059 | 7.0 | 10738 | 0.9203 | 0.6520 | | 0.4401 | 8.0 | 12272 | 1.0364 | 0.6452 | | 0.3795 | 9.0 | 13806 | 1.1486 | 0.6417 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
ericjame/q-Taxi-v3
ericjame
2025-04-23T00:32:00Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-23T00:31:56Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym

# load_from_hub is the helper from the Hugging Face Deep RL course utilities
# (it downloads the pickled Q-table with hf_hub_download and unpickles it)
model = load_from_hub(repo_id="ericjame/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Brianpuz/Qwen1.5-0.5B-GGUF
Brianpuz
2025-04-23T00:26:30Z
1
0
null
[ "gguf", "pretrained", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen1.5-0.5B", "base_model:quantized:Qwen/Qwen1.5-0.5B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-21T23:39:29Z
---
base_model: Qwen/Qwen1.5-0.5B
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- pretrained
- llama-cpp
- gguf-my-repo
---
*Produced by [Antigma Labs](https://antigma.ai)*

## llama.cpp quantization
Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5170">b5170</a> for quantization.

Original model: https://huggingface.co/Qwen/Qwen1.5-0.5B

Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project.

## Prompt format
Qwen1.5-0.5B is a base (pretrained) model, so no chat template applies; prompt it with plain text.

## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen1.5-0.5b-q4_k_m.gguf](https://huggingface.co/Brianpuz/Qwen1.5-0.5B-GGUF/blob/main/qwen1.5-0.5b-q4_k_m.gguf) | Q4_K_M | 0.38 GB | False |

## Downloading using huggingface-cli

<details>
<summary>Click to view download instructions</summary>

First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download Brianpuz/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download Brianpuz/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. Brianpuz_Qwen1.5-0.5B-GGUF) or download them all in place (./)

</details>
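Once downloaded, the quantized file can be tried directly with llama.cpp's CLI; a minimal sketch following the standard llama.cpp flags (the prompt is just an example):

```bash
# Run the quant straight from the Hub (requires a recent llama.cpp build)
llama-cli --hf-repo Brianpuz/Qwen1.5-0.5B-GGUF \
  --hf-file qwen1.5-0.5b-q4_k_m.gguf \
  -p "Once upon a time"
```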
YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF
YOYO-AI
2025-04-23T00:18:47Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:YOYO-AI/YOYO-O1-32B-V4-preview4", "base_model:quantized:YOYO-AI/YOYO-O1-32B-V4-preview4", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T00:17:16Z
--- base_model: YOYO-AI/YOYO-O1-32B-V4-preview4 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF This model was converted to GGUF format from [`YOYO-AI/YOYO-O1-32B-V4-preview4`](https://huggingface.co/YOYO-AI/YOYO-O1-32B-V4-preview4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/YOYO-AI/YOYO-O1-32B-V4-preview4) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF --hf-file yoyo-o1-32b-v4-preview4-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF --hf-file yoyo-o1-32b-v4-preview4-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF --hf-file yoyo-o1-32b-v4-preview4-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo YOYO-AI/YOYO-O1-32B-V4-preview4-Q4_K_M-GGUF --hf-file yoyo-o1-32b-v4-preview4-q4_k_m.gguf -c 2048 ```
LongwayLabs/v1-mn-lv
LongwayLabs
2025-04-23T00:12:35Z
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-23T00:12:26Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: MODELNAME license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # v1 mn lv <Gallery /> ## Model description v1-mn-lv ## Trigger words You should use `MODELNAME` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LongwayLabs/v1-mn-lv/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
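For reference, the LoRA can presumably be applied with diffusers' standard FLUX pipeline; a minimal sketch assuming access to the gated base model (the prompt and step count are illustrative assumptions):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach this LoRA
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LongwayLabs/v1-mn-lv")
pipe.to("cuda")

# The trigger word MODELNAME activates the trained concept
image = pipe("MODELNAME standing on a beach at sunset",
             num_inference_steps=28).images[0]
image.save("out.png")
```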
MaryemOuichka/mistral_finetuned_demii
MaryemOuichka
2025-04-23T00:10:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "region:us" ]
null
2025-04-23T00:09:58Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.1 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
leeyunjai/yolo11-firedetect
leeyunjai
2025-04-23T00:08:09Z
13
0
ultralytics
[ "ultralytics", "yolo", "object-detect", "yolo11", "yolov11", "object-detection", "en", "region:us" ]
object-detection
2025-04-09T01:08:41Z
---
language:
- en
library_name: ultralytics
pipeline_tag: object-detection
tags:
- yolo
- object-detect
- yolo11
- yolov11
---

# Fire and Smoke Detection Based on YOLO11s

This repository contains a PyTorch-exported model for detecting fire and smoke using the YOLO11s architecture. The model has been trained to recognize fire and smoke in images and return their locations and classifications.

## Model Description

The YOLO11s model is optimized for detecting the following classes:

```text
#class
fire
smoke
```

## How to Use

To use this model in your project, follow the steps below:

### 1. Installation

Ensure you have the `ultralytics` library installed, which is used for YOLO models:

```bash
pip install ultralytics
```

### 2. Load the Model

You can load the model and perform detection on an image as follows:

```python
from ultralytics import YOLO

# Load the model
model = YOLO("./firedetect-11s.pt")

# Perform detection on an image
results = model("image.png")

# Display or process the results (results is a list, one entry per image)
results[0].show()  # This will display the image with detected objects
```

### 3. Model Inference

The results object contains bounding boxes, class labels (fire or smoke), and confidence scores for each detected object. Access them like this:

```python
for result in results:
    print(result.boxes)       # Bounding boxes
    print(result.names)       # Class-index-to-name mapping
    print(result.boxes.conf)  # Confidence scores
```

![](result.png)

#yolo11
edwindn/orpheus-1b-0.1-weightDecay-chkpt78201
edwindn
2025-04-23T00:07:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-23T00:06:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Urwis/Gpu
Urwis
2025-04-23T00:06:53Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-23T00:06:53Z
--- license: apache-2.0 ---
IParraMartin/impossible-llms-spanish-mirror-reversal
IParraMartin
2025-04-22T23:54:25Z
1
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T21:31:21Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: impossible-llms-spanish-mirror-reversal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # impossible-llms-spanish-mirror-reversal This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.9937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 12 - eval_batch_size: 8 - seed: 0 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 384 - total_eval_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:--------:|:----:|:---------------:| | 84.4162 | 0.9180 | 7 | 10.3431 | | 77.8512 | 1.9180 | 14 | 9.5818 | | 74.3777 | 2.9180 | 21 | 9.2231 | | 72.9558 | 3.9180 | 28 | 9.0710 | | 72.1401 | 4.9180 | 35 | 8.9695 | | 70.8846 | 5.9180 | 42 | 8.8307 | | 69.6382 | 6.9180 | 49 | 8.6507 | | 67.9279 | 7.9180 | 56 | 8.4302 | | 66.2401 | 8.9180 | 63 | 8.2178 | | 64.6215 | 9.9180 | 70 | 8.0136 | | 62.8811 | 10.9180 | 77 | 7.8174 | | 61.1512 | 11.9180 | 84 | 7.6212 | | 59.7228 | 12.9180 | 91 | 7.4259 | | 58.0425 | 13.9180 | 98 | 7.2282 | | 56.6221 | 14.9180 | 105 | 7.0344 | | 55.1504 | 15.9180 | 112 | 6.8581 | | 53.5989 | 16.9180 | 119 | 6.6987 | | 52.7903 | 17.9180 | 126 | 6.5563 | | 51.6098 | 18.9180 | 133 | 6.4371 | | 50.7139 | 19.9180 | 140 | 6.3459 | | 50.102 | 20.9180 | 147 | 6.2674 | | 49.6932 | 21.9180 | 154 | 6.2228 | | 49.144 | 22.9180 | 161 | 6.1715 | | 48.9015 | 23.9180 | 168 | 6.1300 | | 48.7174 | 24.9180 | 175 | 6.1021 | | 48.2177 | 25.9180 | 182 | 6.0672 | | 48.1698 | 26.9180 | 189 | 6.0371 | | 48.0452 | 27.9180 | 196 | 6.0075 | | 47.6822 | 28.9180 | 203 | 5.9850 | | 47.0775 | 29.9180 | 210 | 5.9656 | | 47.0889 | 30.9180 | 217 | 5.9483 | | 47.0495 | 31.9180 | 224 | 5.9125 | | 46.8446 | 32.9180 | 231 | 5.8966 | | 46.8521 | 33.9180 | 238 | 5.8779 | | 46.5076 | 34.9180 | 245 | 5.8588 | | 46.2375 | 35.9180 | 252 | 5.8437 | | 46.1051 | 36.9180 | 259 | 5.8306 | | 45.9298 | 37.9180 | 266 | 5.8160 | | 45.6758 | 38.9180 | 273 | 5.8035 | | 45.7036 | 39.9180 | 280 | 5.7862 | | 45.5413 | 40.9180 | 287 | 5.7700 | | 45.2449 | 41.9180 | 294 | 5.7602 | | 45.101 | 42.9180 | 301 | 5.7457 | | 44.963 | 43.9180 | 308 | 5.7313 | | 44.778 | 44.9180 | 315 | 5.7205 | | 44.6973 | 45.9180 | 322 | 5.7077 | | 44.3128 | 46.9180 | 329 | 5.6902 | | 44.3436 | 47.9180 | 336 | 5.6782 | | 44.055 | 48.9180 | 343 | 5.6623 | | 43.8327 | 49.9180 | 350 | 5.6461 | | 43.6442 | 50.9180 | 357 | 5.6394 | | 43.3198 | 51.9180 | 364 | 5.6193 | | 43.2727 | 52.9180 | 371 | 5.6041 | | 43.1289 | 53.9180 | 378 | 5.5983 | | 42.9209 | 54.9180 | 385 | 5.5709 | | 42.9342 | 55.9180 | 392 | 5.5631 | | 42.4359 | 56.9180 | 399 | 5.5559 | | 42.3215 | 57.9180 | 406 | 5.5351 | | 42.3673 | 
58.9180 | 413 | 5.5182 | | 41.9158 | 59.9180 | 420 | 5.5038 | | 41.9819 | 60.9180 | 427 | 5.4985 | | 41.7746 | 61.9180 | 434 | 5.4811 | | 41.4981 | 62.9180 | 441 | 5.4636 | | 41.2081 | 63.9180 | 448 | 5.4594 | | 41.0731 | 64.9180 | 455 | 5.4432 | | 41.1288 | 65.9180 | 462 | 5.4316 | | 40.8899 | 66.9180 | 469 | 5.4163 | | 40.5089 | 67.9180 | 476 | 5.4167 | | 40.5938 | 68.9180 | 483 | 5.4056 | | 40.1911 | 69.9180 | 490 | 5.3991 | | 39.9726 | 70.9180 | 497 | 5.3848 | | 39.8896 | 71.9180 | 504 | 5.3879 | | 39.7376 | 72.9180 | 511 | 5.3698 | | 39.3989 | 73.9180 | 518 | 5.3695 | | 39.1616 | 74.9180 | 525 | 5.3568 | | 39.1654 | 75.9180 | 532 | 5.3509 | | 39.0076 | 76.9180 | 539 | 5.3506 | | 38.861 | 77.9180 | 546 | 5.3558 | | 38.5128 | 78.9180 | 553 | 5.3403 | | 38.4616 | 79.9180 | 560 | 5.3381 | | 38.415 | 80.9180 | 567 | 5.3444 | | 38.059 | 81.9180 | 574 | 5.3374 | | 38.1389 | 82.9180 | 581 | 5.3389 | | 37.744 | 83.9180 | 588 | 5.3299 | | 37.8217 | 84.9180 | 595 | 5.3213 | | 37.398 | 85.9180 | 602 | 5.3205 | | 37.417 | 86.9180 | 609 | 5.3301 | | 37.2844 | 87.9180 | 616 | 5.3372 | | 37.1349 | 88.9180 | 623 | 5.3331 | | 36.9323 | 89.9180 | 630 | 5.3285 | | 36.8712 | 90.9180 | 637 | 5.3288 | | 36.6164 | 91.9180 | 644 | 5.3333 | | 36.3797 | 92.9180 | 651 | 5.3395 | | 36.1731 | 93.9180 | 658 | 5.3441 | | 35.9206 | 94.9180 | 665 | 5.3456 | | 35.8725 | 95.9180 | 672 | 5.3454 | | 35.7979 | 96.9180 | 679 | 5.3435 | | 35.6521 | 97.9180 | 686 | 5.3471 | | 35.3987 | 98.9180 | 693 | 5.3561 | | 35.3232 | 99.9180 | 700 | 5.3524 | | 35.1982 | 100.9180 | 707 | 5.3661 | | 34.886 | 101.9180 | 714 | 5.3686 | | 34.7132 | 102.9180 | 721 | 5.3709 | | 34.6847 | 103.9180 | 728 | 5.3774 | | 34.5539 | 104.9180 | 735 | 5.3897 | | 34.4671 | 105.9180 | 742 | 5.3989 | | 34.2363 | 106.9180 | 749 | 5.3929 | | 34.0945 | 107.9180 | 756 | 5.3963 | | 33.8505 | 108.9180 | 763 | 5.4139 | | 33.7776 | 109.9180 | 770 | 5.4137 | | 33.7077 | 110.9180 | 777 | 5.4283 | | 33.5768 | 111.9180 | 784 | 5.4255 | | 33.4114 | 112.9180 | 791 | 5.4368 | | 33.124 | 113.9180 | 798 | 5.4533 | | 33.1255 | 114.9180 | 805 | 5.4452 | | 32.9746 | 115.9180 | 812 | 5.4670 | | 32.9757 | 116.9180 | 819 | 5.4674 | | 32.7149 | 117.9180 | 826 | 5.4849 | | 32.4399 | 118.9180 | 833 | 5.4895 | | 32.6289 | 119.9180 | 840 | 5.5009 | | 32.3678 | 120.9180 | 847 | 5.5007 | | 32.1054 | 121.9180 | 854 | 5.5052 | | 31.9792 | 122.9180 | 861 | 5.5272 | | 32.0312 | 123.9180 | 868 | 5.5281 | | 31.8027 | 124.9180 | 875 | 5.5397 | | 31.7089 | 125.9180 | 882 | 5.5513 | | 31.4487 | 126.9180 | 889 | 5.5479 | | 31.3213 | 127.9180 | 896 | 5.5571 | | 31.2986 | 128.9180 | 903 | 5.5726 | | 31.1625 | 129.9180 | 910 | 5.5723 | | 31.0116 | 130.9180 | 917 | 5.5939 | | 30.9386 | 131.9180 | 924 | 5.6084 | | 30.6873 | 132.9180 | 931 | 5.6066 | | 30.5603 | 133.9180 | 938 | 5.6187 | | 30.4922 | 134.9180 | 945 | 5.6356 | | 30.5098 | 135.9180 | 952 | 5.6411 | | 30.3877 | 136.9180 | 959 | 5.6489 | | 30.0047 | 137.9180 | 966 | 5.6620 | | 29.982 | 138.9180 | 973 | 5.6814 | | 29.666 | 139.9180 | 980 | 5.6748 | | 29.7369 | 140.9180 | 987 | 5.6936 | | 29.5357 | 141.9180 | 994 | 5.7011 | | 29.4863 | 142.9180 | 1001 | 5.7023 | | 29.1884 | 143.9180 | 1008 | 5.7173 | | 29.2733 | 144.9180 | 1015 | 5.7391 | | 29.1444 | 145.9180 | 1022 | 5.7430 | | 28.9668 | 146.9180 | 1029 | 5.7628 | | 28.9572 | 147.9180 | 1036 | 5.7740 | | 28.6542 | 148.9180 | 1043 | 5.7703 | | 28.6571 | 149.9180 | 1050 | 5.7768 | | 28.4431 | 150.9180 | 1057 | 5.7939 | | 28.3375 | 151.9180 | 1064 | 5.8060 | | 28.2275 | 152.9180 | 1071 | 5.8154 
| | 28.2147 | 153.9180 | 1078 | 5.8239 | | 28.0644 | 154.9180 | 1085 | 5.8309 | | 27.9749 | 155.9180 | 1092 | 5.8453 | | 27.8662 | 156.9180 | 1099 | 5.8514 | | 27.6157 | 157.9180 | 1106 | 5.8644 | | 27.3961 | 158.9180 | 1113 | 5.8822 | | 27.46 | 159.9180 | 1120 | 5.8790 | | 27.5323 | 160.9180 | 1127 | 5.8899 | | 27.1845 | 161.9180 | 1134 | 5.9004 | | 27.1134 | 162.9180 | 1141 | 5.9200 | | 27.0488 | 163.9180 | 1148 | 5.9319 | | 26.798 | 164.9180 | 1155 | 5.9282 | | 26.7074 | 165.9180 | 1162 | 5.9415 | | 26.7968 | 166.9180 | 1169 | 5.9535 | | 26.5976 | 167.9180 | 1176 | 5.9539 | | 26.6141 | 168.9180 | 1183 | 5.9724 | | 26.4868 | 169.9180 | 1190 | 5.9768 | | 26.0997 | 170.9180 | 1197 | 5.9941 | | 26.276 | 171.9180 | 1204 | 6.0098 | | 26.1329 | 172.9180 | 1211 | 6.0084 | | 25.9698 | 173.9180 | 1218 | 6.0104 | | 25.7919 | 174.9180 | 1225 | 6.0339 | | 25.7292 | 175.9180 | 1232 | 6.0430 | | 25.5487 | 176.9180 | 1239 | 6.0521 | | 25.6807 | 177.9180 | 1246 | 6.0665 | | 25.5744 | 178.9180 | 1253 | 6.0639 | | 25.4511 | 179.9180 | 1260 | 6.0746 | | 25.1839 | 180.9180 | 1267 | 6.0985 | | 25.102 | 181.9180 | 1274 | 6.0936 | | 25.2993 | 182.9180 | 1281 | 6.1054 | | 25.0789 | 183.9180 | 1288 | 6.1141 | | 24.9031 | 184.9180 | 1295 | 6.1289 | | 24.9472 | 185.9180 | 1302 | 6.1326 | | 24.7081 | 186.9180 | 1309 | 6.1548 | | 24.5715 | 187.9180 | 1316 | 6.1579 | | 24.5298 | 188.9180 | 1323 | 6.1540 | | 24.4873 | 189.9180 | 1330 | 6.1685 | | 24.2975 | 190.9180 | 1337 | 6.1855 | | 24.2701 | 191.9180 | 1344 | 6.1986 | | 24.2495 | 192.9180 | 1351 | 6.2024 | | 24.1608 | 193.9180 | 1358 | 6.2128 | | 23.9288 | 194.9180 | 1365 | 6.2151 | | 23.9611 | 195.9180 | 1372 | 6.2296 | | 23.8268 | 196.9180 | 1379 | 6.2358 | | 23.6677 | 197.9180 | 1386 | 6.2405 | | 23.7449 | 198.9180 | 1393 | 6.2461 | | 23.4324 | 199.9180 | 1400 | 6.2662 | | 23.4854 | 200.9180 | 1407 | 6.2655 | | 23.3554 | 201.9180 | 1414 | 6.2769 | | 23.167 | 202.9180 | 1421 | 6.2847 | | 23.2855 | 203.9180 | 1428 | 6.2861 | | 23.1166 | 204.9180 | 1435 | 6.3017 | | 23.0398 | 205.9180 | 1442 | 6.3119 | | 23.0255 | 206.9180 | 1449 | 6.3172 | | 22.999 | 207.9180 | 1456 | 6.3255 | | 22.7308 | 208.9180 | 1463 | 6.3393 | | 22.7178 | 209.9180 | 1470 | 6.3417 | | 22.6128 | 210.9180 | 1477 | 6.3461 | | 22.5973 | 211.9180 | 1484 | 6.3585 | | 22.6145 | 212.9180 | 1491 | 6.3666 | | 22.4369 | 213.9180 | 1498 | 6.3749 | | 22.3656 | 214.9180 | 1505 | 6.3802 | | 22.2833 | 215.9180 | 1512 | 6.3983 | | 22.1951 | 216.9180 | 1519 | 6.3926 | | 22.1625 | 217.9180 | 1526 | 6.4041 | | 21.998 | 218.9180 | 1533 | 6.4135 | | 21.991 | 219.9180 | 1540 | 6.4330 | | 21.9023 | 220.9180 | 1547 | 6.4238 | | 21.9138 | 221.9180 | 1554 | 6.4345 | | 21.9563 | 222.9180 | 1561 | 6.4423 | | 21.7125 | 223.9180 | 1568 | 6.4432 | | 21.6526 | 224.9180 | 1575 | 6.4544 | | 21.574 | 225.9180 | 1582 | 6.4674 | | 21.5197 | 226.9180 | 1589 | 6.4717 | | 21.475 | 227.9180 | 1596 | 6.4811 | | 21.3944 | 228.9180 | 1603 | 6.4886 | | 21.4001 | 229.9180 | 1610 | 6.4940 | | 21.2599 | 230.9180 | 1617 | 6.5045 | | 21.3074 | 231.9180 | 1624 | 6.5078 | | 21.0136 | 232.9180 | 1631 | 6.5144 | | 21.1079 | 233.9180 | 1638 | 6.5142 | | 21.0878 | 234.9180 | 1645 | 6.5233 | | 20.9775 | 235.9180 | 1652 | 6.5266 | | 20.9687 | 236.9180 | 1659 | 6.5404 | | 20.7984 | 237.9180 | 1666 | 6.5467 | | 20.7934 | 238.9180 | 1673 | 6.5521 | | 20.7419 | 239.9180 | 1680 | 6.5511 | | 20.5449 | 240.9180 | 1687 | 6.5641 | | 20.6149 | 241.9180 | 1694 | 6.5764 | | 20.6499 | 242.9180 | 1701 | 6.5704 | | 20.5261 | 243.9180 | 1708 | 6.5780 | | 20.4831 | 
244.9180 | 1715 | 6.5889 | | 20.4239 | 245.9180 | 1722 | 6.5939 | | 20.2128 | 246.9180 | 1729 | 6.6054 | | 20.1934 | 247.9180 | 1736 | 6.6072 | | 20.1968 | 248.9180 | 1743 | 6.6114 | | 20.1866 | 249.9180 | 1750 | 6.6134 | | 20.104 | 250.9180 | 1757 | 6.6175 | | 20.0609 | 251.9180 | 1764 | 6.6316 | | 20.0985 | 252.9180 | 1771 | 6.6390 | | 19.9381 | 253.9180 | 1778 | 6.6366 | | 19.9409 | 254.9180 | 1785 | 6.6414 | | 19.8636 | 255.9180 | 1792 | 6.6460 | | 19.8073 | 256.9180 | 1799 | 6.6524 | | 19.8491 | 257.9180 | 1806 | 6.6585 | | 19.7852 | 258.9180 | 1813 | 6.6658 | | 19.6229 | 259.9180 | 1820 | 6.6708 | | 19.5722 | 260.9180 | 1827 | 6.6739 | | 19.5835 | 261.9180 | 1834 | 6.6854 | | 19.5987 | 262.9180 | 1841 | 6.6936 | | 19.4856 | 263.9180 | 1848 | 6.6930 | | 19.5904 | 264.9180 | 1855 | 6.6983 | | 19.3708 | 265.9180 | 1862 | 6.7038 | | 19.3553 | 266.9180 | 1869 | 6.7077 | | 19.3373 | 267.9180 | 1876 | 6.7083 | | 19.2019 | 268.9180 | 1883 | 6.7167 | | 19.1206 | 269.9180 | 1890 | 6.7259 | | 19.1018 | 270.9180 | 1897 | 6.7236 | | 19.2208 | 271.9180 | 1904 | 6.7353 | | 19.0552 | 272.9180 | 1911 | 6.7368 | | 19.0681 | 273.9180 | 1918 | 6.7394 | | 19.0372 | 274.9180 | 1925 | 6.7420 | | 19.0147 | 275.9180 | 1932 | 6.7465 | | 18.9359 | 276.9180 | 1939 | 6.7533 | | 18.9365 | 277.9180 | 1946 | 6.7533 | | 18.8647 | 278.9180 | 1953 | 6.7576 | | 18.7693 | 279.9180 | 1960 | 6.7628 | | 18.7637 | 280.9180 | 1967 | 6.7683 | | 18.8001 | 281.9180 | 1974 | 6.7683 | | 18.6263 | 282.9180 | 1981 | 6.7707 | | 18.6731 | 283.9180 | 1988 | 6.7820 | | 18.6376 | 284.9180 | 1995 | 6.7786 | | 18.6834 | 285.9180 | 2002 | 6.7890 | | 18.5305 | 286.9180 | 2009 | 6.7860 | | 18.5434 | 287.9180 | 2016 | 6.7951 | | 18.4738 | 288.9180 | 2023 | 6.7954 | | 18.5392 | 289.9180 | 2030 | 6.8031 | | 18.4525 | 290.9180 | 2037 | 6.8034 | | 18.3516 | 291.9180 | 2044 | 6.8089 | | 18.4084 | 292.9180 | 2051 | 6.8107 | | 18.3341 | 293.9180 | 2058 | 6.8176 | | 18.2295 | 294.9180 | 2065 | 6.8240 | | 18.2289 | 295.9180 | 2072 | 6.8254 | | 18.2957 | 296.9180 | 2079 | 6.8273 | | 18.1978 | 297.9180 | 2086 | 6.8269 | | 18.1374 | 298.9180 | 2093 | 6.8368 | | 18.1589 | 299.9180 | 2100 | 6.8345 | | 18.0843 | 300.9180 | 2107 | 6.8365 | | 18.0587 | 301.9180 | 2114 | 6.8508 | | 17.9929 | 302.9180 | 2121 | 6.8402 | | 17.9596 | 303.9180 | 2128 | 6.8487 | | 17.9953 | 304.9180 | 2135 | 6.8466 | | 17.969 | 305.9180 | 2142 | 6.8532 | | 18.0339 | 306.9180 | 2149 | 6.8526 | | 17.8757 | 307.9180 | 2156 | 6.8600 | | 17.8847 | 308.9180 | 2163 | 6.8596 | | 17.8781 | 309.9180 | 2170 | 6.8660 | | 17.8195 | 310.9180 | 2177 | 6.8693 | | 17.8741 | 311.9180 | 2184 | 6.8666 | | 17.7714 | 312.9180 | 2191 | 6.8732 | | 17.7876 | 313.9180 | 2198 | 6.8768 | | 17.7111 | 314.9180 | 2205 | 6.8820 | | 17.7864 | 315.9180 | 2212 | 6.8847 | | 17.7075 | 316.9180 | 2219 | 6.8839 | | 17.5483 | 317.9180 | 2226 | 6.8867 | | 17.7455 | 318.9180 | 2233 | 6.8962 | | 17.598 | 319.9180 | 2240 | 6.8898 | | 17.6425 | 320.9180 | 2247 | 6.8977 | | 17.6195 | 321.9180 | 2254 | 6.8918 | | 17.5003 | 322.9180 | 2261 | 6.8971 | | 17.5788 | 323.9180 | 2268 | 6.9069 | | 17.5225 | 324.9180 | 2275 | 6.9025 | | 17.5252 | 325.9180 | 2282 | 6.9068 | | 17.5761 | 326.9180 | 2289 | 6.9104 | | 17.4598 | 327.9180 | 2296 | 6.9088 | | 17.3877 | 328.9180 | 2303 | 6.9135 | | 17.3781 | 329.9180 | 2310 | 6.9171 | | 17.4783 | 330.9180 | 2317 | 6.9221 | | 17.295 | 331.9180 | 2324 | 6.9189 | | 17.3924 | 332.9180 | 2331 | 6.9210 | | 17.2561 | 333.9180 | 2338 | 6.9229 | | 17.3171 | 334.9180 | 2345 | 6.9279 | | 17.3314 | 335.9180 
| 2352 | 6.9260 | | 17.345 | 336.9180 | 2359 | 6.9280 | | 17.2402 | 337.9180 | 2366 | 6.9335 | | 17.2594 | 338.9180 | 2373 | 6.9359 | | 17.1937 | 339.9180 | 2380 | 6.9325 | | 17.1731 | 340.9180 | 2387 | 6.9320 | | 17.2473 | 341.9180 | 2394 | 6.9390 | | 17.1868 | 342.9180 | 2401 | 6.9378 | | 17.1588 | 343.9180 | 2408 | 6.9383 | | 17.1417 | 344.9180 | 2415 | 6.9439 | | 17.0871 | 345.9180 | 2422 | 6.9438 | | 17.104 | 346.9180 | 2429 | 6.9450 | | 17.1095 | 347.9180 | 2436 | 6.9461 | | 17.1458 | 348.9180 | 2443 | 6.9487 | | 17.0723 | 349.9180 | 2450 | 6.9488 | | 17.1555 | 350.9180 | 2457 | 6.9481 | | 17.107 | 351.9180 | 2464 | 6.9551 | | 17.0555 | 352.9180 | 2471 | 6.9532 | | 17.057 | 353.9180 | 2478 | 6.9550 | | 17.0571 | 354.9180 | 2485 | 6.9561 | | 17.0464 | 355.9180 | 2492 | 6.9564 | | 16.9419 | 356.9180 | 2499 | 6.9552 | | 16.9971 | 357.9180 | 2506 | 6.9591 | | 17.0158 | 358.9180 | 2513 | 6.9612 | | 16.9852 | 359.9180 | 2520 | 6.9609 | | 16.9336 | 360.9180 | 2527 | 6.9651 | | 16.9507 | 361.9180 | 2534 | 6.9685 | | 16.9286 | 362.9180 | 2541 | 6.9668 | | 16.8417 | 363.9180 | 2548 | 6.9698 | | 16.9085 | 364.9180 | 2555 | 6.9729 | | 16.9229 | 365.9180 | 2562 | 6.9705 | | 16.893 | 366.9180 | 2569 | 6.9724 | | 16.8789 | 367.9180 | 2576 | 6.9681 | | 16.8963 | 368.9180 | 2583 | 6.9730 | | 16.8282 | 369.9180 | 2590 | 6.9736 | | 16.8398 | 370.9180 | 2597 | 6.9757 | | 16.8059 | 371.9180 | 2604 | 6.9758 | | 16.8391 | 372.9180 | 2611 | 6.9773 | | 16.9314 | 373.9180 | 2618 | 6.9767 | | 16.8705 | 374.9180 | 2625 | 6.9770 | | 16.7638 | 375.9180 | 2632 | 6.9794 | | 16.8538 | 376.9180 | 2639 | 6.9801 | | 16.7878 | 377.9180 | 2646 | 6.9798 | | 16.786 | 378.9180 | 2653 | 6.9828 | | 16.7546 | 379.9180 | 2660 | 6.9813 | | 16.8046 | 380.9180 | 2667 | 6.9815 | | 16.7852 | 381.9180 | 2674 | 6.9852 | | 16.734 | 382.9180 | 2681 | 6.9834 | | 16.8187 | 383.9180 | 2688 | 6.9820 | | 16.7764 | 384.9180 | 2695 | 6.9857 | | 16.7835 | 385.9180 | 2702 | 6.9861 | | 16.7463 | 386.9180 | 2709 | 6.9860 | | 16.6309 | 387.9180 | 2716 | 6.9865 | | 16.6992 | 388.9180 | 2723 | 6.9881 | | 16.7021 | 389.9180 | 2730 | 6.9872 | | 16.7778 | 390.9180 | 2737 | 6.9873 | | 16.784 | 391.9180 | 2744 | 6.9870 | | 16.7504 | 392.9180 | 2751 | 6.9877 | | 16.7041 | 393.9180 | 2758 | 6.9891 | | 16.7505 | 394.9180 | 2765 | 6.9917 | | 16.7962 | 395.9180 | 2772 | 6.9908 | | 16.7077 | 396.9180 | 2779 | 6.9912 | | 16.7166 | 397.9180 | 2786 | 6.9910 | | 16.7462 | 398.9180 | 2793 | 6.9917 | | 16.713 | 399.9180 | 2800 | 6.9915 | | 16.6515 | 400.9180 | 2807 | 6.9921 | | 16.7043 | 401.9180 | 2814 | 6.9916 | | 16.719 | 402.9180 | 2821 | 6.9915 | | 16.697 | 403.9180 | 2828 | 6.9929 | | 16.7353 | 404.9180 | 2835 | 6.9926 | | 16.7601 | 405.9180 | 2842 | 6.9916 | | 16.6814 | 406.9180 | 2849 | 6.9921 | | 16.7516 | 407.9180 | 2856 | 6.9929 | | 16.6698 | 408.9180 | 2863 | 6.9931 | | 16.6765 | 409.9180 | 2870 | 6.9941 | | 16.6709 | 410.9180 | 2877 | 6.9936 | | 16.7178 | 411.9180 | 2884 | 6.9932 | | 16.6784 | 412.9180 | 2891 | 6.9935 | | 16.7612 | 413.9180 | 2898 | 6.9933 | | 16.7469 | 414.9180 | 2905 | 6.9932 | | 16.6571 | 415.9180 | 2912 | 6.9934 | | 16.6858 | 416.9180 | 2919 | 6.9936 | | 16.6591 | 417.9180 | 2926 | 6.9935 | | 16.7057 | 418.9180 | 2933 | 6.9935 | | 16.7523 | 419.9180 | 2940 | 6.9936 | | 16.7288 | 420.9180 | 2947 | 6.9936 | | 16.6824 | 421.9180 | 2954 | 6.9936 | | 16.6956 | 422.9180 | 2961 | 6.9937 | | 16.659 | 423.9180 | 2968 | 6.9937 | | 16.6825 | 424.9180 | 2975 | 6.9937 | | 16.6794 | 425.9180 | 2982 | 6.9937 | | 16.677 | 426.9180 | 2989 | 6.9937 | 
| 16.6037 | 427.9180 | 2996 | 6.9937 | | 16.6751 | 428.5246 | 3000 | 6.9937 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.4.0+cu121 - Datasets 3.4.0 - Tokenizers 0.21.0
thiomajid/codebert-java-inconsistency
thiomajid
2025-04-22T23:54:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:microsoft/codebert-base", "base_model:finetune:microsoft/codebert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T23:48:38Z
--- library_name: transformers base_model: microsoft/codebert-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: codebert-java-inconsistency results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert-java-inconsistency This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3543 - Accuracy: 0.9167 - F1: 0.9183 - Precision: 0.9235 - Recall: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.4625 | 3.1290 | 50 | 0.8954 | 0.7531 | 0.7554 | 0.7765 | 0.7531 | | 0.5834 | 6.2581 | 100 | 0.5559 | 0.8189 | 0.8241 | 0.8483 | 0.8189 | | 0.2858 | 9.3871 | 150 | 0.4046 | 0.8930 | 0.8945 | 0.8995 | 0.8930 | | 0.1624 | 12.5161 | 200 | 0.4461 | 0.8642 | 0.8661 | 0.8750 | 0.8642 | | 0.1084 | 15.6452 | 250 | 0.4012 | 0.9012 | 0.9038 | 0.9123 | 0.9012 | | 0.074 | 18.7742 | 300 | 0.4689 | 0.8765 | 0.8817 | 0.8972 | 0.8765 | | 0.0574 | 21.9032 | 350 | 0.4885 | 0.8807 | 0.8845 | 0.8970 | 0.8807 | | 0.0452 | 25.0 | 400 | 0.4900 | 0.8848 | 0.8888 | 0.9011 | 0.8848 | | 0.0396 | 28.1290 | 450 | 0.4896 | 0.8765 | 0.8805 | 0.8934 | 0.8765 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
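The card does not document the expected input format; assuming the standard sequence-classification setup with a comment/code text pair (a guess, not a documented interface), a minimal loading sketch might look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "thiomajid/codebert-java-inconsistency"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Hypothetical example: score a Javadoc comment against its method body
comment = "/** Returns the sum of a and b. */"
code = "int f(int a, int b) { return a - b; }"
inputs = tok(comment, code, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # label meanings are not documented on the card
```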
Shaelois/MeetingScript
Shaelois
2025-04-22T23:49:23Z
13
1
transformers
[ "transformers", "safetensors", "bigbird_pegasus", "text2text-generation", "summarization", "en", "dataset:huuuyeah/meetingbank", "base_model:google/bigbird-pegasus-large-bigpatent", "base_model:finetune:google/bigbird-pegasus-large-bigpatent", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2025-04-20T01:04:25Z
---
license: apache-2.0
datasets:
- huuuyeah/meetingbank
language:
- en
metrics:
- rouge
base_model:
- google/bigbird-pegasus-large-bigpatent
pipeline_tag: summarization
library_name: transformers
---

# MeetingScript

> A BigBird‐Pegasus model fine‑tuned for meeting transcription summarization on the MeetingBank dataset.

📦 **Model Files**
- **Weights & config**: `pytorch_model.bin`, `config.json`
- **Tokenizer**: `tokenizer.json`, `tokenizer_config.json`, `merges.txt`, `special_tokens_map.json`
- **Generation defaults**: `generation_config.json`

🔗 **Code:** https://github.com/kevin0437/Meeting_scripts

---

## Model Description

**MeetingScript** is a sequence‑to‑sequence model based on [google/bigbird-pegasus-large-bigpatent](https://huggingface.co/google/bigbird-pegasus-large-bigpatent) and fine‑tuned on the [MeetingBank](https://huggingface.co/datasets/huuuyeah/meetingbank) corpus of meeting transcripts paired with human‐written summaries. It is designed to take long meeting transcripts (up to 4096 tokens) and produce concise, coherent summaries.

---

## Evaluation Results

Evaluated on the held‑out test split of MeetingBank (≈ 600 transcripts), using beam search (4 beams, max_length=600):

| Metric | F1 Score (%) |
|-------------|-------------:|
| **ROUGE‑1** | 51.5556 |
| **ROUGE‑2** | 38.5378 |
| **ROUGE‑L** | 48.0786 |
| **ROUGE‑Lsum** | 48.0142 |

---

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

# 1) Load from the Hub
tokenizer = AutoTokenizer.from_pretrained("Shaelois/MeetingScript")
model = AutoModelForSeq2SeqLM.from_pretrained("Shaelois/MeetingScript").to("cuda")  # keep model and inputs on the same device

# 2) Summarize a long transcript
transcript = """
Alice: Good morning everyone, let’s get started…
Bob: I updated the design mockups…
… (thousands of words) …
"""
inputs = tokenizer(
    transcript,
    max_length=4096,
    truncation=True,
    return_tensors="pt"
).to("cuda")

summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=150,
    early_stopping=True
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("📝 Summary:", summary)
```

---

## Training Data

- Dataset: [MeetingBank](https://huggingface.co/datasets/huuuyeah/meetingbank)
- Splits: Train (5000+), Validation (600+), Test (600+)
- Preprocessing: Sliding‑window chunking for sequences > 4096 tokens
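The sliding‑window preprocessing mentioned above is not shown in the card; here is one plausible sketch of how transcripts longer than the 4096‑token limit could be chunked (the window and stride values are assumptions, not the authors' actual settings):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Shaelois/MeetingScript")

def sliding_windows(text: str, window: int = 4096, stride: int = 2048):
    """Split a long transcript into overlapping token windows of size `window`."""
    ids = tokenizer(text, truncation=False)["input_ids"]
    for start in range(0, max(len(ids) - window, 0) + 1, stride):
        yield tokenizer.decode(ids[start:start + window], skip_special_tokens=True)

# Each window can then be summarized independently and the partial summaries merged.
```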
Kariuki20/Eslam
Kariuki20
2025-04-22T23:47:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-22T23:47:44Z
--- license: apache-2.0 ---
NaomiH/model
NaomiH
2025-04-22T23:45:48Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T23:41:17Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** NaomiH - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Hartunka/distilbert_rand_20_v2_stsb
Hartunka
2025-04-22T23:39:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T23:38:05Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: distilbert_rand_20_v2_stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.29365361139119855 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_stsb This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.3318 - Pearson: 0.3011 - Spearmanr: 0.2937 - Combined Score: 0.2974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 2.8409 | 1.0 | 23 | 2.6483 | 0.1073 | 0.0837 | 0.0955 | | 1.937 | 2.0 | 46 | 2.4296 | 0.1982 | 0.1718 | 0.1850 | | 1.7127 | 3.0 | 69 | 2.4168 | 0.2334 | 0.2205 | 0.2270 | | 1.3482 | 4.0 | 92 | 2.3318 | 0.3011 | 0.2937 | 0.2974 | | 0.9691 | 5.0 | 115 | 2.5006 | 0.3014 | 0.2903 | 0.2959 | | 0.7285 | 6.0 | 138 | 2.4679 | 0.3349 | 0.3254 | 0.3302 | | 0.572 | 7.0 | 161 | 2.5069 | 0.3510 | 0.3474 | 0.3492 | | 0.4434 | 8.0 | 184 | 2.4404 | 0.3636 | 0.3552 | 0.3594 | | 0.3722 | 9.0 | 207 | 2.3603 | 0.3501 | 0.3421 | 0.3461 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
SaddamHasanov213/Qwen2.5-3B-instruct-BankBotv3-GGUF
SaddamHasanov213
2025-04-22T23:39:19Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-22T23:37:47Z
--- base_model: unsloth/Qwen2.5-3B-instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SaddamHasanov213 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-3B-instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
edwindn/orpheus-1b-0.1
edwindn
2025-04-22T23:38:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:36:12Z
--- library_name: transformers license: llama3.2 base_model: meta-llama/Llama-3.2-1B tags: - generated_from_trainer model-index: - name: orpheus-1b-0.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # orpheus-1b-0.1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
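The card lists hyperparameters but no usage; a hedged generation sketch follows (assumed standard causal-LM usage for a Llama-3.2-1B fine-tune).

```python
# Hedged sketch: plain text generation with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "edwindn/orpheus-1b-0.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```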
Hartunka/distilbert_rand_20_v2_sst2
Hartunka
2025-04-22T23:37:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T23:31:24Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_rand_20_v2_sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8027522935779816 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_sst2 This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4456 - Accuracy: 0.8028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3907 | 1.0 | 264 | 0.4456 | 0.8028 | | 0.2199 | 2.0 | 528 | 0.4952 | 0.8245 | | 0.1616 | 3.0 | 792 | 0.4961 | 0.8188 | | 0.1207 | 4.0 | 1056 | 0.6331 | 0.8062 | | 0.0916 | 5.0 | 1320 | 0.6101 | 0.7970 | | 0.0728 | 6.0 | 1584 | 0.7173 | 0.8005 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
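For reference, a minimal inference sketch (not part of the original card). The checkpoint keeps the generic LABEL_0/LABEL_1 names, so mapping LABEL_1 to "positive" is an assumption based on the usual SST-2 convention.

```python
# Minimal sketch: binary sentiment inference with the SST-2 fine-tune.
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/distilbert_rand_20_v2_sst2")
print(clf("This movie was an absolute delight."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- LABEL_1 assumed to be "positive"
```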
quangdegi002/quang
quangdegi002
2025-04-22T23:34:06Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2025-04-22T23:34:06Z
--- license: bsd-2-clause ---
SaddamHasanov213/Qwen2.5-3B-instruct-BankBotv3
SaddamHasanov213
2025-04-22T23:32:13Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:28:43Z
--- base_model: unsloth/Qwen2.5-3B-instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SaddamHasanov213 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-3B-instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
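A hedged chat-inference sketch (assumed usage; the card documents training only). Qwen2.5-Instruct models expect their chat template, which `apply_chat_template` applies automatically; the example question is hypothetical.

```python
# Hedged sketch: chat-style inference with the instruct fine-tune.
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "SaddamHasanov213/Qwen2.5-3B-instruct-BankBotv3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

messages = [{"role": "user", "content": "How do I block a lost debit card?"}]
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                    return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=128)
print(tokenizer.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```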
rahatneuron/llama3.1_8B_hellaswag_norm_8L
rahatneuron
2025-04-22T23:32:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:28:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/openthoughts2_100k_32B
mlfoundations-dev
2025-04-22T23:31:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T07:25:41Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-32B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: openthoughts2_100k_32B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openthoughts2_100k_32B This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the mlfoundations-dev/openthoughts2_100k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 128 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - total_eval_batch_size: 1024 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
Hartunka/distilbert_rand_20_v2_rte
Hartunka
2025-04-22T23:31:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T23:30:27Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_rand_20_v2_rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.5342960288808665 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_rte This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6893 - Accuracy: 0.5343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6973 | 1.0 | 10 | 0.6893 | 0.5343 | | 0.6835 | 2.0 | 20 | 0.6986 | 0.5271 | | 0.6373 | 3.0 | 30 | 0.7705 | 0.5126 | | 0.5447 | 4.0 | 40 | 0.8775 | 0.4982 | | 0.4205 | 5.0 | 50 | 1.0983 | 0.4801 | | 0.2894 | 6.0 | 60 | 1.4305 | 0.5018 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
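As with the other GLUE fine-tunes, inference usage is not documented; a hedged sketch for RTE (a sentence-pair entailment task) is below. Label names are the generic defaults, so the entailment mapping is an assumption.

```python
# Hedged sketch: textual-entailment inference on an RTE-style sentence pair.
from transformers import pipeline

nli = pipeline("text-classification", model="Hartunka/distilbert_rand_20_v2_rte")
print(nli({"text": "A dog is running in the park.",
           "text_pair": "An animal is outside."}))
# RTE uses LABEL_0/LABEL_1 by default; which one means "entailment" is assumed
```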
rahatneuron/llama3.1_8B_hellaswag_norm_7L
rahatneuron
2025-04-22T23:29:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:26:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xw17/Phi-3.5-mini-instruct_finetuned_1_optimized1
xw17
2025-04-22T23:28:50Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T23:25:25Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lekhansh/Qwen2.5-3B-Instruct-Scientific-Text-Cleaner
Lekhansh
2025-04-22T23:26:11Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T18:29:40Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nathanialhunt2000/661b7c50-01c8-43c5-b785-b0ed3229b3f6
nathanialhunt2000
2025-04-22T23:21:40Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:adapter:unsloth/mistral-7b-instruct-v0.2", "region:us" ]
null
2025-04-22T23:20:43Z
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/mistral-7b-instruct-v0.2 model-index: - name: nathanialhunt2000/661b7c50-01c8-43c5-b785-b0ed3229b3f6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nathanialhunt2000/661b7c50-01c8-43c5-b785-b0ed3229b3f6 This PEFT adapter was trained from [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.5989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
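Because this repository holds a PEFT adapter rather than full weights, inference requires attaching it to the base model; a hedged sketch (assumed usage, not from the card):

```python
# Hedged sketch: load the LoRA adapter on top of its Mistral base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-instruct-v0.2"
adapter_id = "nathanialhunt2000/661b7c50-01c8-43c5-b785-b0ed3229b3f6"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches adapter weights
```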
Detomo/cl-nagoya-sup-simcse-ja-nss-v1_1
Detomo
2025-04-22T23:19:10Z
0
0
sentence-transformers
[ "sentence-transformers", "onnx", "safetensors", "openvino", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:16199", "loss:CustomBatchAllTripletLoss", "arxiv:1908.10084", "arxiv:1703.07737", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-22T14:09:11Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:16199 - loss:CustomBatchAllTripletLoss widget: - source_sentence: 科目:コンクリート。名称:立上り壁コンクリート。 sentences: - 科目:ユニット及びその他。名称:棚。 - 科目:ユニット及びその他。名称:事務室スチールパーティション。 - 科目:ユニット及びその他。名称:F-R#収納棚。 - source_sentence: 科目:タイル。名称:段鼻タイル。 sentences: - 科目:タイル。名称:巾木磁器質タイル。 - 科目:タイル。名称:立上りタイルA。 - 科目:タイル。名称:アプローチテラス立上り天端床タイルA。 - source_sentence: 科目:ユニット及びその他。名称:#階F-WC#他パウダーカウンター。 sentences: - 科目:ユニット及びその他。名称:便所フック(二段)。 - 科目:ユニット及びその他。名称:テラス床ウッドデッキ。 - 科目:ユニット及びその他。名称:フラットテラス床ウッドデッキ。 - source_sentence: 科目:ユニット及びその他。名称:階数表示+停止階案内サイン。 sentences: - 科目:ユニット及びその他。名称:エレベーターホール入口サイン。 - 科目:ユニット及びその他。名称:場外離着陸用オイルトラップ。 - 科目:ユニット及びその他。名称:器材カウンター。 - source_sentence: 科目:ユニット及びその他。名称:階段内踊場階数サイン。 sentences: - 科目:ユニット及びその他。名称:F-T#布団収納棚。 - 科目:ユニット及びその他。名称:#F廊下#飾り棚。 - 科目:ユニット及びその他。名称:F-#階理科室#収納棚。 pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_1") # Run inference sentences = [ '科目:ユニット及びその他。名称:階段内踊場階数サイン。', '科目:ユニット及びその他。名称:F-#階理科室#収納棚。', '科目:ユニット及びその他。名称:F-T#布団収納棚。', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 16,199 training samples * Columns: <code>sentence</code> and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence | label | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | type | string | int | | details | <ul><li>min: 11 tokens</li><li>mean: 18.73 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>0: ~0.30%</li><li>1: ~0.30%</li><li>2: ~0.30%</li><li>3: ~0.30%</li><li>4: ~2.40%</li><li>5: ~0.30%</li><li>6: ~0.30%</li><li>7: ~0.30%</li><li>8: ~0.30%</li><li>9: ~0.30%</li><li>10: ~0.30%</li><li>11: ~0.40%</li><li>12: ~0.30%</li><li>13: ~0.30%</li><li>14: ~0.40%</li><li>15: ~0.30%</li><li>16: ~0.30%</li><li>17: ~0.30%</li><li>18: ~0.90%</li><li>19: ~0.30%</li><li>20: ~1.30%</li><li>21: ~0.30%</li><li>22: ~1.10%</li><li>23: ~0.30%</li><li>24: ~0.30%</li><li>25: ~0.30%</li><li>26: ~0.30%</li><li>27: ~0.30%</li><li>28: ~0.30%</li><li>29: ~0.30%</li><li>30: ~0.30%</li><li>31: ~0.30%</li><li>32: ~0.30%</li><li>33: ~0.30%</li><li>34: ~0.30%</li><li>35: ~0.30%</li><li>36: ~0.30%</li><li>37: ~0.30%</li><li>38: ~0.30%</li><li>39: ~0.30%</li><li>40: ~0.40%</li><li>41: ~0.30%</li><li>42: ~0.30%</li><li>43: ~0.30%</li><li>44: ~0.60%</li><li>45: ~0.70%</li><li>46: ~0.30%</li><li>47: ~0.30%</li><li>48: ~0.30%</li><li>49: ~0.30%</li><li>50: ~0.30%</li><li>51: ~0.30%</li><li>52: ~0.30%</li><li>53: ~0.30%</li><li>54: ~0.30%</li><li>55: ~0.30%</li><li>56: ~0.30%</li><li>57: ~0.80%</li><li>58: ~0.30%</li><li>59: ~0.30%</li><li>60: ~0.60%</li><li>61: ~0.30%</li><li>62: ~0.30%</li><li>63: ~0.30%</li><li>64: ~0.50%</li><li>65: ~0.30%</li><li>66: ~0.30%</li><li>67: ~0.30%</li><li>68: ~0.30%</li><li>69: ~0.50%</li><li>70: ~0.60%</li><li>71: ~0.30%</li><li>72: ~0.30%</li><li>73: ~0.30%</li><li>74: ~0.30%</li><li>75: ~0.30%</li><li>76: ~0.30%</li><li>77: ~0.30%</li><li>78: ~0.30%</li><li>79: ~0.30%</li><li>80: ~0.30%</li><li>81: ~0.30%</li><li>82: ~0.30%</li><li>83: ~0.30%</li><li>84: ~0.80%</li><li>85: ~0.60%</li><li>86: ~0.50%</li><li>87: ~0.30%</li><li>88: ~0.30%</li><li>89: ~16.30%</li><li>90: ~0.30%</li><li>91: ~0.30%</li><li>92: ~0.30%</li><li>93: ~0.30%</li><li>94: ~0.30%</li><li>95: ~0.30%</li><li>96: ~0.30%</li><li>97: ~0.30%</li><li>98: ~0.50%</li><li>99: ~0.30%</li><li>100: ~0.30%</li><li>101: ~0.30%</li><li>102: ~0.30%</li><li>103: ~0.30%</li><li>104: ~0.30%</li><li>105: ~0.30%</li><li>106: ~0.30%</li><li>107: ~0.70%</li><li>108: ~0.30%</li><li>109: ~3.20%</li><li>110: ~0.30%</li><li>111: ~0.40%</li><li>112: ~2.30%</li><li>113: ~0.30%</li><li>114: ~0.30%</li><li>115: ~0.50%</li><li>116: ~0.50%</li><li>117: ~0.50%</li><li>118: ~0.40%</li><li>119: ~0.30%</li><li>120: ~0.30%</li><li>121: ~0.30%</li><li>122: ~0.80%</li><li>123: ~0.30%</li><li>124: ~0.30%</li><li>125: ~0.30%</li><li>126: ~0.30%</li><li>127: ~0.30%</li><li>128: ~0.30%</li><li>129: ~0.30%</li><li>130: ~0.30%</li><li>131: ~0.50%</li><li>132: ~0.30%</li><li>133: 
~0.40%</li><li>134: ~0.30%</li><li>135: ~0.30%</li><li>136: ~0.30%</li><li>137: ~0.30%</li><li>138: ~0.30%</li><li>139: ~0.30%</li><li>140: ~0.30%</li><li>141: ~0.30%</li><li>142: ~0.30%</li><li>143: ~0.30%</li><li>144: ~0.40%</li><li>145: ~0.30%</li><li>146: ~0.30%</li><li>147: ~0.30%</li><li>148: ~0.30%</li><li>149: ~0.30%</li><li>150: ~0.30%</li><li>151: ~0.70%</li><li>152: ~0.30%</li><li>153: ~0.30%</li><li>154: ~0.30%</li><li>155: ~1.30%</li><li>156: ~0.30%</li><li>157: ~0.40%</li><li>158: ~0.30%</li><li>159: ~0.30%</li><li>160: ~0.30%</li><li>161: ~1.50%</li><li>162: ~0.30%</li><li>163: ~0.30%</li><li>164: ~0.30%</li><li>165: ~0.30%</li><li>166: ~0.30%</li><li>167: ~0.30%</li><li>168: ~0.30%</li><li>169: ~1.50%</li><li>170: ~0.30%</li><li>171: ~0.30%</li><li>172: ~7.20%</li><li>173: ~0.30%</li><li>174: ~1.00%</li><li>175: ~0.30%</li><li>176: ~0.30%</li><li>177: ~0.30%</li><li>178: ~1.80%</li><li>179: ~0.30%</li><li>180: ~0.50%</li><li>181: ~0.70%</li><li>182: ~0.30%</li><li>183: ~0.30%</li></ul> | * Samples: | sentence | label | |:-----------------------------------------|:---------------| | <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>0</code> | | <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>0</code> | | <code>科目:コンクリート。名称:免震基礎天端グラウト注入。</code> | <code>0</code> | * Loss: <code>sentence_transformer_lib.custom_batch_all_trip_loss.CustomBatchAllTripletLoss</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `learning_rate`: 1e-05 - `weight_decay`: 0.01 - `num_train_epochs`: 250 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: group_by_label #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 512 - `per_device_eval_batch_size`: 512 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.01 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 250 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 
'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: group_by_label - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | |:-------:|:----:|:-------------:| | 4.125 | 100 | 0.0682 | | 8.25 | 200 | 0.0745 | | 12.375 | 300 | 0.0764 | | 16.5 | 400 | 0.0778 | | 20.625 | 500 | 0.077 | | 24.75 | 600 | 0.0767 | | 29.125 | 700 | 0.0738 | | 33.25 | 800 | 0.0701 | | 37.375 | 900 | 0.0677 | | 41.5 | 1000 | 0.0689 | | 45.625 | 1100 | 0.0661 | | 49.75 | 1200 | 0.0677 | | 54.125 | 1300 | 0.0627 | | 58.25 | 1400 | 0.0629 | | 62.375 | 1500 | 0.0625 | | 66.5 | 1600 | 0.0655 | | 70.625 | 1700 | 0.0645 | | 74.75 | 1800 | 0.0595 | | 79.125 | 1900 | 0.0608 | | 83.25 | 2000 | 0.0614 | | 87.375 | 2100 | 0.0567 | | 91.5 | 2200 | 0.0612 | | 95.625 | 2300 | 0.0599 | | 99.75 | 2400 | 0.059 | | 104.125 | 2500 | 0.0547 | | 108.25 | 2600 | 0.0571 | | 112.375 | 2700 | 0.0543 | | 116.5 | 2800 | 0.0574 | | 120.625 | 2900 | 0.0561 | | 124.75 | 3000 | 0.0534 | | 129.125 | 3100 | 0.0554 | | 133.25 | 3200 | 0.0507 | | 137.375 | 3300 | 0.0533 | | 141.5 | 3400 | 0.05 | | 145.625 | 3500 | 0.0569 | | 149.75 | 3600 | 0.0551 | | 154.125 | 3700 | 0.0558 | | 158.25 | 3800 | 0.0539 | | 162.375 | 3900 | 0.0498 | | 166.5 | 4000 | 0.0512 | | 170.625 | 4100 | 0.0481 | | 174.75 | 4200 | 0.0492 | | 179.125 | 4300 | 0.0513 | | 183.25 | 4400 | 0.0474 | | 187.375 | 4500 | 0.0491 | | 191.5 | 4600 | 0.0513 | | 195.625 | 4700 | 0.0453 | | 199.75 | 4800 | 0.0453 | | 204.125 | 4900 | 0.0489 | | 208.25 | 5000 | 0.0481 | | 212.375 | 5100 | 0.0498 | | 216.5 | 5200 | 0.044 | | 220.625 | 5300 | 0.0486 | | 224.75 | 5400 | 0.0399 | | 229.125 | 5500 | 0.0384 | | 233.25 | 5600 | 0.0428 | | 237.375 | 5700 | 0.0447 | | 241.5 | 5800 | 0.0479 | | 245.625 | 5900 | 0.0434 | | 249.75 | 6000 | 0.0442 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex 
@inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### CustomBatchAllTripletLoss ```bibtex @misc{hermans2017defense, title={In Defense of the Triplet Loss for Person Re-Identification}, author={Alexander Hermans and Lucas Beyer and Bastian Leibe}, year={2017}, eprint={1703.07737}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
peekayitachi/roberta-political-bias
peekayitachi
2025-04-22T23:15:55Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "dataset:pranjali97/Bias-detection-combined", "dataset:valurank/PoliticalBias_AllSides_Txt", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T22:41:08Z
--- library_name: transformers license: mit tags: - roberta - text-classification - political-bias - transformers - nlp - fine-tuned datasets: - pranjali97/Bias-detection-combined - valurank/PoliticalBias_AllSides_Txt - peekayitachi/allsides - custom-political-bias-data metrics: - accuracy - f1 --- # 🧠 RoBERTa Political Bias Classifier This is a fine-tuned [RoBERTa](https://huggingface.co/roberta-base) model for **political bias detection** in text. It classifies a sentence or article snippet into one of the following three categories: - 🔴 **Right** - 🟡 **Center** - 🔵 **Left** Trained on a combination of public and custom-labeled datasets, the model is capable of classifying political leaning in Indian and general English news/opinion text. --- ## 📥 Example Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained("peekayitachi/roberta-political-bias") tokenizer = AutoTokenizer.from_pretrained("peekayitachi/roberta-political-bias") text = "Our nation's sovereignty must be protected, and we should prioritize national interests." inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True) with torch.no_grad(): logits = model(**inputs).logits predicted = torch.argmax(logits, dim=1).item() label_map = {0: "Left", 1: "Center", 2: "Right"} print("Predicted Bias:", label_map[predicted]) ``` ## Model Details ### Model Description Base model: roberta-base Architecture: Transformer encoder with classification head Fine-tuned on: Multi-source labeled data (~38k samples) Languages: English (Indian and global political context) License: MIT Author: peekayitachi (Pranav) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations This model reflects the labeling choices and distribution of the training data. It may: - Overfit to news-style text and miss subtle bias in blogs/social media - Be less accurate on texts that are neutral in tone or multi-opinionated - Reflect U.S./Indian-centric definitions of political categories ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code in the Example Usage section above to get started with the model. ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications.
Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed]
mixxxeee/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_humming_porcupine
mixxxeee
2025-04-22T23:14:46Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am aquatic humming porcupine", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-21T19:07:02Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_humming_porcupine tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am aquatic humming porcupine - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_humming_porcupine This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mixxxeee/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_humming_porcupine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cindyyyy/tuning
cindyyyy
2025-04-22T23:07:26Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T11:19:01Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tuning This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3260 - Accuracy: 0.9199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3735 | 1.0 | 1250 | 0.2887 | 0.9132 | | 0.2082 | 2.0 | 2500 | 0.3260 | 0.9199 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
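The card above stops at the training log and gives no usage snippet; a minimal inference sketch follows. It assumes the checkpoint keeps the default sequence-classification head, and because the card does not document the label names, the `LABEL_0`/`LABEL_1` outputs are placeholders whose meaning must be checked against the model's config.

```python
# Minimal sketch: run the fine-tuned classifier on a single sentence.
# The label names below are transformers defaults, not documented classes.
from transformers import pipeline

clf = pipeline("text-classification", model="cindyyyy/tuning")

result = clf("The service was quick and the staff were friendly.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- mapping to real classes is unknown
```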
dgambettaphd/M_llm3_gen5_run0_W_doc1000_synt64_tot128_SYNLAST
dgambettaphd
2025-04-22T23:07:08Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T23:06:53Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HanningZhang/Qwen2.5-Math-7B-raft-plusplus_em-sample1n8-sample8-filter1.0-insufficient0.0-a0.001-b2.0-iter12
HanningZhang
2025-04-22T23:02:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T22:59:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cnaidu402/roberta-large-peft-lora
cnaidu402
2025-04-22T22:58:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-22T01:39:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
objfilm659/obaida
objfilm659
2025-04-22T22:55:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-22T22:55:50Z
--- license: apache-2.0 ---
mlx-community/GLM-4-32B-0414-8bit
mlx-community
2025-04-22T22:52:46Z
0
0
mlx
[ "mlx", "safetensors", "glm4", "text-generation", "conversational", "zh", "en", "base_model:THUDM/GLM-4-32B-0414", "base_model:quantized:THUDM/GLM-4-32B-0414", "license:mit", "8-bit", "region:us" ]
text-generation
2025-04-22T21:44:51Z
--- license: mit language: - zh - en pipeline_tag: text-generation library_name: mlx tags: - mlx base_model: THUDM/GLM-4-32B-0414 --- # mlx-community/GLM-4-32B-0414-8bit This model [mlx-community/GLM-4-32B-0414-8bit](https://huggingface.co/mlx-community/GLM-4-32B-0414-8bit) was converted to MLX format from [THUDM/GLM-4-32B-0414](https://huggingface.co/THUDM/GLM-4-32B-0414) using mlx-lm version **0.23.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/GLM-4-32B-0414-8bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
HanningZhang/Qwen2.5-Math-7B-raft-plusplus_em-sample1n8-sample8-filter1.0-insufficient0.0-a0.001-b2.0-iter11
HanningZhang
2025-04-22T22:49:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T22:46:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Hartunka/distilbert_rand_20_v2_qnli
Hartunka
2025-04-22T22:48:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T22:36:13Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert_rand_20_v2_qnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE QNLI type: glue args: qnli metrics: - name: Accuracy type: accuracy value: 0.6357312831777412 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_qnli This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6374 - Accuracy: 0.6357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.664 | 1.0 | 410 | 0.6415 | 0.6291 | | 0.6251 | 2.0 | 820 | 0.6374 | 0.6357 | | 0.5585 | 3.0 | 1230 | 0.6591 | 0.6286 | | 0.4549 | 4.0 | 1640 | 0.7202 | 0.6341 | | 0.3405 | 5.0 | 2050 | 0.8814 | 0.6301 | | 0.2441 | 6.0 | 2460 | 1.0931 | 0.6310 | | 0.1814 | 7.0 | 2870 | 1.2922 | 0.6315 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
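The card reports metrics but no inference snippet; a minimal sketch for QNLI-style question/sentence pairs is below. The index-to-label mapping is assumed from the usual GLUE QNLI convention (0 = entailment, 1 = not_entailment) and should be verified against the checkpoint's config.

```python
# Minimal QNLI inference sketch: does the sentence answer the question?
# The label order is an assumption based on the standard GLUE convention.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hartunka/distilbert_rand_20_v2_qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower stands on the Champ de Mars in Paris."

# QNLI models consume the question and sentence as a single text pair.
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()

print(["entailment", "not_entailment"][pred])  # assumed label order
```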
Duruo/gemma-3-finetune
Duruo
2025-04-22T22:46:39Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T22:43:09Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Duruo - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
cyberoleg/gemma-3-12b-it-reasoning-v3
cyberoleg
2025-04-22T22:46:20Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-12b-it", "base_model:finetune:unsloth/gemma-3-12b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T22:46:17Z
--- base_model: unsloth/gemma-3-12b-it tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** cyberoleg - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
genki10/BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold4
genki10
2025-04-22T22:45:14Z
0
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T22:26:36Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2177 - Qwk: 0.3394 - Mse: 1.2177 - Rmse: 1.1035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 5 | 8.2791 | 0.0 | 8.2791 | 2.8773 | | No log | 2.0 | 10 | 3.8091 | 0.0103 | 3.8091 | 1.9517 | | No log | 3.0 | 15 | 1.7205 | 0.0677 | 1.7205 | 1.3117 | | No log | 4.0 | 20 | 1.0009 | 0.0316 | 1.0009 | 1.0004 | | No log | 5.0 | 25 | 0.9545 | 0.0417 | 0.9545 | 0.9770 | | No log | 6.0 | 30 | 1.1989 | 0.1230 | 1.1989 | 1.0949 | | No log | 7.0 | 35 | 0.8688 | 0.3869 | 0.8688 | 0.9321 | | No log | 8.0 | 40 | 1.1936 | 0.3408 | 1.1936 | 1.0925 | | No log | 9.0 | 45 | 1.1411 | 0.3625 | 1.1411 | 1.0682 | | No log | 10.0 | 50 | 1.2043 | 0.3309 | 1.2043 | 1.0974 | | No log | 11.0 | 55 | 0.8488 | 0.4344 | 0.8488 | 0.9213 | | No log | 12.0 | 60 | 0.7555 | 0.4293 | 0.7555 | 0.8692 | | No log | 13.0 | 65 | 1.1456 | 0.3740 | 1.1456 | 1.0703 | | No log | 14.0 | 70 | 0.9426 | 0.4768 | 0.9426 | 0.9709 | | No log | 15.0 | 75 | 1.0309 | 0.4504 | 1.0309 | 1.0153 | | No log | 16.0 | 80 | 1.5214 | 0.3044 | 1.5214 | 1.2335 | | No log | 17.0 | 85 | 1.3251 | 0.3349 | 1.3251 | 1.1512 | | No log | 18.0 | 90 | 1.0881 | 0.4119 | 1.0881 | 1.0431 | | No log | 19.0 | 95 | 1.8424 | 0.2338 | 1.8424 | 1.3574 | | No log | 20.0 | 100 | 0.8930 | 0.4290 | 0.8930 | 0.9450 | | No log | 21.0 | 105 | 1.1340 | 0.3707 | 1.1340 | 1.0649 | | No log | 22.0 | 110 | 1.0547 | 0.3785 | 1.0547 | 1.0270 | | No log | 23.0 | 115 | 1.0583 | 0.3605 | 1.0583 | 1.0287 | | No log | 24.0 | 120 | 1.2595 | 0.3424 | 1.2595 | 1.1223 | | No log | 25.0 | 125 | 1.3513 | 0.3293 | 1.3513 | 1.1625 | | No log | 26.0 | 130 | 1.6459 | 0.2699 | 1.6459 | 1.2829 | | No log | 27.0 | 135 | 1.4521 | 0.2628 | 1.4521 | 1.2050 | | No log | 28.0 | 140 | 0.9124 | 0.3823 | 0.9124 | 0.9552 | | No log | 29.0 | 145 | 1.5234 | 0.2652 | 1.5234 | 1.2343 | | No log | 30.0 | 150 | 1.5545 | 0.2699 | 1.5545 | 1.2468 | | No log | 31.0 | 155 | 1.2995 | 0.3158 | 1.2995 | 1.1399 | | No log | 32.0 | 160 | 1.2897 | 0.3421 | 1.2897 | 1.1357 | | No log | 33.0 | 165 | 1.5345 | 0.2790 | 1.5345 | 1.2388 | | No log | 34.0 | 170 | 1.4441 | 0.2943 | 1.4441 | 1.2017 | | No log | 35.0 | 175 | 1.6854 | 0.2495 | 1.6854 | 1.2982 | | No log | 36.0 | 180 | 1.4345 | 0.2945 | 1.4345 | 1.1977 | | No log | 37.0 | 185 | 1.2593 | 0.3278 | 1.2593 | 1.1222 | | No log | 38.0 | 190 | 1.1134 
| 0.3588 | 1.1134 | 1.0552 | | No log | 39.0 | 195 | 1.4305 | 0.2843 | 1.4305 | 1.1960 | | No log | 40.0 | 200 | 1.5121 | 0.2711 | 1.5121 | 1.2297 | | No log | 41.0 | 205 | 1.4984 | 0.2736 | 1.4984 | 1.2241 | | No log | 42.0 | 210 | 1.4987 | 0.2750 | 1.4987 | 1.2242 | | No log | 43.0 | 215 | 1.3254 | 0.2917 | 1.3254 | 1.1513 | | No log | 44.0 | 220 | 1.2177 | 0.3394 | 1.2177 | 1.1035 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
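The Qwk/MSE/RMSE metrics above suggest a regression-style scoring head rather than discrete classes; the sketch below reads the raw logit as a score. The single-output head is inferred from those metrics, not stated on the card, so verify the checkpoint's `num_labels` before relying on it.

```python
# Minimal sketch: use the fine-tuned BERT as a scorer by reading the raw
# regression output. A single-output head is an inference from the card's
# MSE/Qwk metrics, not a documented fact.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "genki10/BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The essay argues its main point clearly and supports it with examples."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"predicted score: {score:.3f}")
```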
locuslab/base-smollm2-1.7b-score0_20p_123rephrase_mild_45ref_45web_ref6x-600B-step-250000
locuslab
2025-04-22T22:44:24Z
0
0
null
[ "pytorch", "llama", "model", "transformer", "smollm2", "license:mit", "region:us" ]
null
2025-04-22T22:38:33Z
--- version: main family: smollm2-1.7b model_name: score0_20p_123rephrase_mild_45ref_45web_ref6x-600B-step-250000 license: mit tags: - model - transformer - smollm2 --- # SmolLM2 score0_20p_123rephrase_mild_45ref_45web_ref6x-600B-step-250000 (Version: main) ## Model Details - **Architecture:** SmolLM2 - **Parameters:** 1.7B ## Training Configuration ```yaml optimizer: class_path: torch.optim.AdamW init_args: lr: 0.0005 weight_decay: 0.01 precision: bf16-mixed seed: 42 train: global_batch_size: 1024 max_seq_length: 2048 max_tokens: 600000000000 micro_batch_size: 8 ``` ## Model Loading and Revision System This repository hosts multiple revisions of the model. To load a specific revision, use the `revision` parameter. For example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("locuslab/score0_20p_123rephrase_mild_45ref_45web_ref6x-600B-step-250000", revision="final") tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_20p_123rephrase_mild_45ref_45web_ref6x-600B-step-250000", revision="final") ``` Replace `"final"` with the desired revision.
ngochan2k6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_flexible_giraffe
ngochan2k6
2025-04-22T22:41:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am freckled flexible giraffe", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T22:39:32Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_flexible_giraffe tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am freckled flexible giraffe - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_flexible_giraffe This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ngochan2k6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-freckled_flexible_giraffe", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rgarcia2304/sleepAI
rgarcia2304
2025-04-22T22:39:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-22T22:38:52Z
--- base_model: unsloth/llama-3-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** rgarcia2304 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SaddamHasanov213/Qwen2.5-3B-instruct-BankBotv2-GGUF
SaddamHasanov213
2025-04-22T22:38:57Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-22T22:37:25Z
--- base_model: unsloth/Qwen2.5-3B-instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SaddamHasanov213 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MercuraTech/reranker-10k-base
MercuraTech
2025-04-22T22:38:13Z
0
0
null
[ "safetensors", "bert", "text-classification", "region:us" ]
text-classification
2025-04-22T20:59:10Z
--- pipeline_tag: "text-classification" --- # MercuraTech/reranker-10k-base A German cross-encoder reranker fine-tuned on MercuraTech/reranker_10k.
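The card does not show how to call the reranker; the sketch below scores (query, candidate) pairs the way cross-encoders are typically used. It assumes a single-logit relevance head, which is common for rerankers but is not confirmed by the card, so check the checkpoint's `num_labels` before relying on the `squeeze`.

```python
# Minimal cross-encoder reranking sketch: jointly encode each (query, candidate)
# pair and sort candidates by the predicted relevance score.
# Assumes a single-logit regression head (num_labels == 1).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MercuraTech/reranker-10k-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "Edelstahlschrauben M6"
candidates = [
    "Edelstahlschraube M6 x 30 mm, A2",
    "Holzbohrer-Set, 8-teilig",
    "Sechskantschraube verzinkt M6 x 40 mm",
]

# Cross-encoders score each (query, candidate) pair in a single forward pass.
inputs = tokenizer([query] * len(candidates), candidates,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

for score, cand in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {cand}")
```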
matrixportal/Metafor
matrixportal
2025-04-22T22:36:40Z
0
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "matrixportal", "conversational", "tr", "en", "dataset:matrixportal/Turkish-Poems-Alpaca", "base_model:matrixportal/Turkce-LLM", "base_model:finetune:matrixportal/Turkce-LLM", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2025-04-22T19:10:03Z
--- base_model: matrixportal/Turkce-LLM language: - tr - en library_name: transformers license: apache-2.0 tags: - matrixportal inference: false datasets: - matrixportal/Turkish-Poems-Alpaca --- # matrixportal/Metafor **Model Description:** This model was developed by applying LoRA fine-tuning to `matrixportal/Turkce-LLM` on the following dataset(s), with a focus on the Turkish language and culture: - `matrixportal/Turkish-Poems-Alpaca` The goal of this training is for the model to produce more natural, context-aware, and effective responses in Turkish. The work aims to contribute to the open-source community and to support progress in Turkish natural language processing.
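No usage snippet is given on the card; a minimal sketch is below. It assumes the checkpoint ships a chat template (the base is a Llama-style instruct model), and the prompt is illustrative rather than taken from the card.

```python
# Minimal sketch: ask the fine-tuned model for a short Turkish poem.
# Assumes the tokenizer provides a chat template; the prompt is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="matrixportal/Metafor")

messages = [{"role": "user", "content": "Istanbul hakkinda kisa bir siir yaz."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```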
Jahleel/terry-melody-lora
Jahleel
2025-04-22T22:35:52Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-22T22:20:33Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TMF --- # Terry Melody Lora <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TMF` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TMF", "lora_weights": "https://huggingface.co/Jahleel/terry-melody-lora/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Jahleel/terry-melody-lora', weight_name='lora.safetensors') image = pipeline('TMF').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Jahleel/terry-melody-lora/discussions) to add images that show off what you’ve made with this LoRA.
jaco-bro/Dia-1.6B
jaco-bro
2025-04-22T22:34:45Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-04-22T22:28:34Z
--- license: apache-2.0 ---
Hartunka/distilbert_rand_20_v2_cola
Hartunka
2025-04-22T22:34:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/distilbert_rand_20_v2", "base_model:finetune:Hartunka/distilbert_rand_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T22:33:35Z
--- library_name: transformers language: - en base_model: Hartunka/distilbert_rand_20_v2 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation - accuracy model-index: - name: distilbert_rand_20_v2_cola results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 - name: Accuracy type: accuracy value: 0.6912751793861389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_rand_20_v2_cola This model is a fine-tuned version of [Hartunka/distilbert_rand_20_v2](https://huggingface.co/Hartunka/distilbert_rand_20_v2) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6151 - Matthews Correlation: 0.0 - Accuracy: 0.6913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:| | 0.6147 | 1.0 | 34 | 0.6151 | 0.0 | 0.6913 | | 0.5912 | 2.0 | 68 | 0.6195 | -0.0163 | 0.6884 | | 0.5468 | 3.0 | 102 | 0.6210 | 0.0543 | 0.6865 | | 0.4978 | 4.0 | 136 | 0.6938 | 0.0802 | 0.6424 | | 0.4357 | 5.0 | 170 | 0.7163 | 0.0813 | 0.6548 | | 0.3843 | 6.0 | 204 | 0.8270 | 0.0845 | 0.6529 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.21.1
hussamalafandi/smollm2-sft-rewrite
hussamalafandi
2025-04-22T22:32:17Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T00:31:23Z
--- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: smollm2-sft-results tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for smollm2-sft-results This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline prompt = [ {'content': "You're an AI assistant for text re-writing. Rewrite the input " 'text to make it more concise while preserving its core meaning.', 'role': 'system'}, {'content': 'Hey Alex,\n' '\n' "I hope you're doing well! It's been a while since we met at the " 'film festival last year. I was the one with the short film about ' "the old abandoned factory. Anyway, I'm reaching out because I'm " 'currently working on my thesis film project and I could really ' 'use some advice on cinematography. I remember our conversation ' 'about visual storytelling and I was hoping you might have some ' 'tips or insights to share.\n' '\n' 'My film is a drama set in a small town, and I want to capture ' 'the mood and atmosphere of the location through my ' "cinematography. I'm planning to shoot on location next month, " "but I'm still trying to figure out the best way to approach it. " 'If you have any suggestions or resources you could point me to, ' 'I would be incredibly grateful.\n' '\n' "Also, I heard from a mutual friend that you're having a " 'photography exhibition soon. Congratulations! I would love to ' "attend if you don't mind sending me the details.\n" '\n' 'Thanks in advance for any help you can provide. I really ' 'appreciate it.\n' '\n' 'Best,\n' 'Jordan', 'role': 'user'}] generator = pipeline("text-generation", model="hussamalafandi/smollm2-sft-rewrite", device="cuda") output = generator(prompt, max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hussam-alafandi/huggingface/runs/cyldjq1f) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ShadowHacker110/llama-3.1-instruct
ShadowHacker110
2025-04-22T22:31:05Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "trl", "sft", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-04-22T22:30:51Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit library_name: transformers model_name: llama-3.1-instruct tags: - generated_from_trainer - unsloth - trl - sft licence: license --- # Model Card for llama-3.1-instruct This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ShadowHacker110/llama-3.1-instruct", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
jessicata/Llama-3.1-8B-Q8_0-GGUF
jessicata
2025-04-22T22:30:59Z
0
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "endpoints_compatible", "region:us" ]
text-generation
2025-04-22T22:30:19Z
--- base_model: meta-llama/Llama-3.1-8B language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.1 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # jessicata/Llama-3.1-8B-Q8_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.1-8B`](https://huggingface.co/meta-llama/Llama-3.1-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. 
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.1-8B) for more details on the model.

## Use with llama.cpp
Install llama.cpp via brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo jessicata/Llama-3.1-8B-Q8_0-GGUF --hf-file llama-3.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo jessicata/Llama-3.1-8B-Q8_0-GGUF --hf-file llama-3.1-8b-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo jessicata/Llama-3.1-8B-Q8_0-GGUF --hf-file llama-3.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo jessicata/Llama-3.1-8B-Q8_0-GGUF --hf-file llama-3.1-8b-q8_0.gguf -c 2048
```
MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF
MaziyarPanahi
2025-04-22T22:30:25Z
0
0
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:deepcogito/cogito-v1-preview-llama-70B", "base_model:quantized:deepcogito/cogito-v1-preview-llama-70B", "region:us", "conversational" ]
text-generation
2025-04-21T14:26:41Z
---
base_model: deepcogito/cogito-v1-preview-llama-70B
inference: false
model_creator: deepcogito
model_name: cogito-v1-preview-llama-70B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---

# [MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF](https://huggingface.co/MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF)
- Model creator: [deepcogito](https://huggingface.co/deepcogito)
- Original model: [deepcogito/cogito-v1-preview-llama-70B](https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B)

## Description
[MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF](https://huggingface.co/MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF) contains GGUF format model files for [deepcogito/cogito-v1-preview-llama-70B](https://huggingface.co/deepcogito/cogito-v1-preview-llama-70B).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers had not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
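For a quick programmatic test, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below; the quant filename glob is an assumption — substitute one of the GGUF files actually published in this repo:

```python
from llama_cpp import Llama

# Download one of the quants from this repo via the Hugging Face Hub.
# NOTE: the filename glob is illustrative; pick a quant that exists in the repo.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/cogito-v1-preview-llama-70B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; check the repo's file list
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```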
iEsmeralda/mrm8488-finetuned-ner-tech
iEsmeralda
2025-04-22T22:29:33Z
107
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "NLP", "NER", "Spanish", "TECH", "es", "dataset:iEsmeralda/ner_tech_dataset_bio", "base_model:mrm8488/bert-spanish-cased-finetuned-ner", "base_model:finetune:mrm8488/bert-spanish-cased-finetuned-ner", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-19T03:39:42Z
---
library_name: transformers
tags:
- NLP
- NER
- Spanish
- TECH
datasets:
- iEsmeralda/ner_tech_dataset_bio
language:
- es
base_model:
- mrm8488/bert-spanish-cased-finetuned-ner
---

### Model Description

This model was trained to recognize entities such as "procesamiento de lenguaje natural" (natural language processing) under a TECH label. This is useful because "procesamiento de lenguaje natural" denotes a technique, and it carries more value when recognized as a single entity than as the separate words "procesamiento", "de", "lenguaje", "natural".

- **Developed by:** iEsmeralda
- **Shared by:** iEsmeralda
- **Model type:** Named Entity Recognition (NER)
- **Language(s) (NLP):** Spanish
- **Finetuned from model:** mrm8488/bert-spanish-cased-finetuned-ner

### Model Sources

- **Dataset:** iEsmeralda/ner_tech_dataset_bio

## How to Get Started with the Model

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

modelo = AutoModelForTokenClassification.from_pretrained("iEsmeralda/mrm8488-finetuned-ner-tech")
tokenizer = AutoTokenizer.from_pretrained("iEsmeralda/mrm8488-finetuned-ner-tech")
ner_pipeline = pipeline("ner", model=modelo, tokenizer=tokenizer, aggregation_strategy="simple")
```
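To see what the pipeline returns, a short usage sketch follows; the printed entities are illustrative, and the TECH span is expected only if the model behaves as described above:

```python
texto = "El procesamiento de lenguaje natural permite extraer información de textos."
for entidad in ner_pipeline(texto):
    # With aggregation_strategy="simple", each result carries the grouped label,
    # the matched text span, and a confidence score.
    print(entidad["entity_group"], "->", entidad["word"], round(float(entidad["score"]), 3))
# Expected (assuming the model tags as trained): a single TECH span
# covering "procesamiento de lenguaje natural".
```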
RedHatAI/Qwen2.5-VL-3B-Instruct-FP8-Dynamic
RedHatAI
2025-04-22T22:26:30Z
120
1
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "vllm", "vision", "fp8", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
image-text-to-text
2025-02-06T16:25:56Z
---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
---

# Qwen2.5-VL-3B-Instruct-FP8-Dynamic

## Model Overview
- **Model Architecture:** Qwen2.5-VL-3B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) to the FP8 data type, ready for inference with vLLM >= 0.5.2.

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.

<details>
<summary>Model Creation Code</summary>

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load model.
model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Recipe
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["MistralDecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"

# Perform oneshot
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR,
)
```
</details>

## Evaluation

The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks.
The evaluations were conducted using the following commands:

<details>
<summary>Evaluation Commands</summary>

### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

```
vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```

### Text-based Tasks

#### MMLU

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```

#### MGSM

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir
```
</details>

### Accuracy

<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<th>nm-testing/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>44.56</td>
<td>45.78</td>
<td>102.74%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>75.94</td>
<td>76.22</td>
<td>100.37%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>92.53</td>
<td>92.40</td>
<td>99.86%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>81.20</td>
<td>80.72</td>
<td>99.41%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>54.15</td>
<td>53.25</td>
<td>98.34%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>69.28</b></td>
<td><b>69.67</b></td>
<td><b>100.56%</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>43.69</td>
<td>43.14</td>
<td>98.74%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>65.32</td>
<td>65.03</td>
<td>99.56%</td>
</tr>
</tbody>
</table>

## Inference Performance

This model achieves up to 1.10x speedup in single-stream deployment and up to 1.32x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```
</details>

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>3.1</td>
<td>1454</td>
<td>1.8</td>
<td>2546</td>
<td>1.7</td>
<td>2610</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>2.6</td>
<td>1708</td>
<td>1.3</td>
<td>3340</td>
<td>1.3</td>
<td>3459</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.57</td>
<td>2.4</td>
<td>1886</td>
<td>1.0</td>
<td>4409</td>
<td>1.0</td>
<td>4409</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>2.2</td>
<td>920</td>
<td>1.3</td>
<td>1603</td>
<td>1.2</td>
<td>1636</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.09</td>
<td>2.1</td>
<td>975</td>
<td>1.2</td>
<td>1743</td>
<td>1.1</td>
<td>1814</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.20</td>
<td>2.0</td>
<td>1011</td>
<td>1.0</td>
<td>2015</td>
<td>1.0</td>
<td>2012</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.5</td>
<td>740</td>
<td>0.9</td>
<td>1221</td>
<td>0.9</td>
<td>1276</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.06</td>
<td>1.4</td>
<td>768</td>
<td>0.9</td>
<td>1276</td>
<td>0.8</td>
<td>1399</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.24</td>
<td>0.9</td>
<td>1219</td>
<td>0.9</td>
<td>1270</td>
<td>0.8</td>
<td>1304</td>
</tr>
</tbody>
</table>

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.5</td>
<td>2405</td>
<td>2.6</td>
<td>11889</td>
<td>2.9</td>
<td>12909</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.26</td>
<td>0.6</td>
<td>2725</td>
<td>3.4</td>
<td>15162</td>
<td>3.9</td>
<td>17673</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.39</td>
<td>0.6</td>
<td>2548</td>
<td>3.9</td>
<td>17437</td>
<td>4.7</td>
<td>21223</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>0.8</td>
<td>1663</td>
<td>3.9</td>
<td>7899</td>
<td>4.4</td>
<td>8924</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w8a8</th>
<td>1.06</td>
<td>0.9</td>
<td>1734</td>
<td>4.2</td>
<td>8488</td>
<td>4.7</td>
<td>9548</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.10</td>
<td>0.9</td>
<td>1775</td>
<td>4.2</td>
<td>8540</td>
<td>5.1</td>
<td>10318</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-3B-Instruct</th>
<td></td>
<td>1.1</td>
<td>1188</td>
<td>4.3</td>
<td>4656</td>
<td>4.3</td>
<td>4676</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic</th>
<td>1.15</td>
<td>1.4</td>
<td>1570</td>
<td>4.3</td>
<td>4676</td>
<td>4.8</td>
<td>5220</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-3B-Instruct-quantized.w4a16</th>
<td>1.96</td>
<td>4.2</td>
<td>4598</td>
<td>4.1</td>
<td>4505</td>
<td>4.4</td>
<td>4838</td>
</tr>
</tbody>
</table>

**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens

**QPS: Queries per second.

**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
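As noted in the Deployment section, vLLM also exposes an OpenAI-compatible server. A minimal client-side sketch follows; the server launch command in the comment and the image URL are assumptions, not part of the original card:

```python
# Launch the server first, e.g.:
#   vllm serve neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic --max-model-len 4096
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-3B-Instruct-FP8-Dynamic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},  # placeholder URL
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```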
genki10/BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold3
genki10
2025-04-22T22:26:29Z
0
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-22T22:06:24Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9898 - Qwk: 0.2642 - Mse: 0.9899 - Rmse: 0.9949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 5 | 9.1006 | 0.0 | 9.0991 | 3.0165 | | No log | 2.0 | 10 | 4.4652 | 0.0133 | 4.4639 | 2.1128 | | No log | 3.0 | 15 | 2.2290 | 0.0426 | 2.2283 | 1.4927 | | No log | 4.0 | 20 | 1.1343 | 0.0102 | 1.1338 | 1.0648 | | No log | 5.0 | 25 | 1.4036 | 0.0201 | 1.4029 | 1.1844 | | No log | 6.0 | 30 | 1.1699 | 0.0326 | 1.1694 | 1.0814 | | No log | 7.0 | 35 | 1.4393 | 0.0529 | 1.4385 | 1.1994 | | No log | 8.0 | 40 | 0.9654 | 0.1258 | 0.9653 | 0.9825 | | No log | 9.0 | 45 | 1.9505 | 0.0493 | 1.9499 | 1.3964 | | No log | 10.0 | 50 | 0.8296 | 0.2070 | 0.8299 | 0.9110 | | No log | 11.0 | 55 | 1.5114 | 0.1616 | 1.5110 | 1.2292 | | No log | 12.0 | 60 | 0.8892 | 0.2362 | 0.8892 | 0.9430 | | No log | 13.0 | 65 | 0.8347 | 0.2131 | 0.8350 | 0.9138 | | No log | 14.0 | 70 | 1.0932 | 0.2310 | 1.0934 | 1.0457 | | No log | 15.0 | 75 | 0.7942 | 0.2795 | 0.7945 | 0.8914 | | No log | 16.0 | 80 | 0.9512 | 0.2884 | 0.9514 | 0.9754 | | No log | 17.0 | 85 | 1.3070 | 0.2158 | 1.3070 | 1.1433 | | No log | 18.0 | 90 | 1.6531 | 0.0983 | 1.6529 | 1.2856 | | No log | 19.0 | 95 | 0.8388 | 0.2749 | 0.8392 | 0.9161 | | No log | 20.0 | 100 | 1.2164 | 0.2192 | 1.2164 | 1.1029 | | No log | 21.0 | 105 | 1.2512 | 0.2243 | 1.2512 | 1.1186 | | No log | 22.0 | 110 | 1.8776 | 0.0771 | 1.8773 | 1.3701 | | No log | 23.0 | 115 | 1.0380 | 0.2447 | 1.0383 | 1.0189 | | No log | 24.0 | 120 | 1.1613 | 0.2257 | 1.1615 | 1.0777 | | No log | 25.0 | 125 | 1.3777 | 0.1994 | 1.3778 | 1.1738 | | No log | 26.0 | 130 | 1.9082 | 0.0908 | 1.9080 | 1.3813 | | No log | 27.0 | 135 | 1.1940 | 0.2329 | 1.1941 | 1.0928 | | No log | 28.0 | 140 | 1.2159 | 0.2202 | 1.2161 | 1.1028 | | No log | 29.0 | 145 | 1.0354 | 0.2676 | 1.0357 | 1.0177 | | No log | 30.0 | 150 | 1.3963 | 0.1502 | 1.3965 | 1.1817 | | No log | 31.0 | 155 | 1.5812 | 0.0914 | 1.5814 | 1.2575 | | No log | 32.0 | 160 | 1.3953 | 0.1390 | 1.3955 | 1.1813 | | No log | 33.0 | 165 | 1.2048 | 0.2138 | 1.2050 | 1.0977 | | No log | 34.0 | 170 | 1.4473 | 0.1296 | 1.4474 | 1.2031 | | No log | 35.0 | 175 | 1.8392 | 0.0862 | 1.8392 | 1.3562 | | No log | 36.0 | 180 | 1.5515 | 0.1059 | 1.5516 | 1.2456 | | No log | 37.0 | 185 | 1.1378 | 0.2015 | 1.1379 | 1.0667 | | No log | 38.0 | 190 | 1.1149 
| 0.2176 | 1.1150 | 1.0559 | | No log | 39.0 | 195 | 1.2742 | 0.1633 | 1.2743 | 1.1288 | | No log | 40.0 | 200 | 1.3904 | 0.1262 | 1.3904 | 1.1792 | | No log | 41.0 | 205 | 1.3087 | 0.1851 | 1.3086 | 1.1439 | | No log | 42.0 | 210 | 1.1071 | 0.2220 | 1.1072 | 1.0522 | | No log | 43.0 | 215 | 1.2328 | 0.1804 | 1.2329 | 1.1104 | | No log | 44.0 | 220 | 1.4088 | 0.1530 | 1.4088 | 1.1869 | | No log | 45.0 | 225 | 1.5554 | 0.1131 | 1.5552 | 1.2471 | | No log | 46.0 | 230 | 0.9898 | 0.2642 | 0.9899 | 0.9949 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
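The card ships no inference snippet; a minimal loading sketch follows. The head configuration (regression vs. classification) is not documented, so treat the output handling as an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "genki10/BERT_V8_sp20_lw10_ex50_lo00_k7_k7_fold3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example essay text to score.", return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# The QWK/MSE/RMSE metrics above suggest a regression-style score;
# the exact label scale is undocumented, so interpret raw logits with care.
print(logits)
```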
luencedano/luencedanom
luencedano
2025-04-22T22:20:57Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-22T21:23:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
sam-rei/ppo-LunarLander-v2
sam-rei
2025-04-22T22:18:20Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-22T22:17:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.70 +/- 20.98 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
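The usage block above is left as a TODO; a minimal sketch for loading and evaluating this checkpoint follows. The zip filename assumes the usual `<algo>-<env>.zip` convention — verify it against the repo's file list:

```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repo if loading fails.
checkpoint = load_from_hub(repo_id="sam-rei/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```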
maramora/moraromero
maramora
2025-04-22T22:14:43Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-22T21:36:34Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---