| Column | Type | Range |
|---|---|---|
| `modelId` | string | length 5 to 139 |
| `author` | string | length 2 to 42 |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-22 00:45:16 |
| `downloads` | int64 | 0 to 223M |
| `likes` | int64 | 0 to 11.7k |
| `library_name` | string | 570 distinct values |
| `tags` | list | length 1 to 4.05k |
| `pipeline_tag` | string | 55 distinct values |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-22 00:43:28 |
| `card` | string | length 11 to 1.01M |
JPLabsAI/llava-finetuning-final_training
JPLabsAI
2025-09-15T23:38:30Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-15T23:38:18Z
--- base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mllama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** JPLabsAI - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
csikasote/mms-1b-all-bemgen-combined-m50f100-52-DAT-3e-1
csikasote
2025-09-15T23:33:14Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-15T23:07:15Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m50f100-52-DAT-3e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m50f100-52-DAT-3e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.3082 - Cer: 0.0865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 2.4104 | 0.5618 | 100 | 2.9395 | 0.9998 | | 0.8752 | 1.1236 | 200 | 0.6134 | 0.1879 | | 0.5707 | 1.6854 | 300 | 0.3458 | 0.0976 | | 0.5842 | 2.2472 | 400 | 0.3081 | 0.0865 | | 0.6233 | 2.8090 | 500 | 0.2918 | 0.0795 | | 0.6648 | 3.3708 | 600 | 0.2848 | 0.0788 | | 0.6471 | 3.9326 | 700 | 0.2853 | 0.0803 | | 0.6606 | 4.4944 | 800 | 0.2880 | 0.0806 | | 0.6603 | 5.0562 | 900 | 0.2893 | 0.0817 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 
0.21.0
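The card above reports a character error rate (Cer) of 0.0865 on the evaluation set. As a reminder of what this metric measures, here is a minimal illustrative sketch (plain Python, not the repository's evaluation code): CER is the Levenshtein edit distance between hypothesis and reference, divided by the reference length in characters.

```python
# Minimal character error rate (CER) sketch: Levenshtein edit distance
# over reference length. Illustrative only; the card's 0.0865 was
# computed by the Trainer, not by this snippet.
def edit_distance(ref: str, hyp: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    return edit_distance(ref, hyp) / len(ref)

print(cer("umwana", "umwane"))  # one substitution over six reference characters
```

In practice the distance is summed over the whole evaluation set before dividing by the total reference length, but the per-utterance form above conveys the idea.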
tommycik/ControlNetCannyNew
tommycik
2025-09-15T23:32:00Z
0
0
diffusers
[ "diffusers", "safetensors", "flux", "flux-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-15T17:21:48Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other inference: true tags: - flux - flux-diffusers - text-to-image - diffusers - controlnet - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-tommycik/ControlNetCannyNew These are controlnet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning. You can find some example images below. prompt: transparent cocktail glass with elegant stem and a double curved bowl on a white background ![images_0](./images_0.png) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
harpertoken/harpertokenConvFT
harpertoken
2025-09-15T23:31:13Z
39
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "text-generation-inference", "conversational-ai", "en", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-23T20:37:01Z
--- license: mit language: - en base_model: - gpt2 tags: - text-generation-inference - conversational-ai - gpt2 metrics: - perplexity - bleu - f1 library_name: transformers --- # HarperToken ConvFT ## Model Details - **Model Name**: HarperToken ConvFT - **Base Model**: gpt2 - **Model Type**: GPT-2-based conversational AI model - **Max Sequence Length**: 1024 tokens ## Intended Use Generates human-like responses for chatbots, virtual assistants, and dialogue systems. ## Training Data The model was fine-tuned on the DailyDialog dataset, featuring: - **Training Examples**: 11,118 - **Validation Examples**: 1,000 - **Test Examples**: 1,000 ## Dataset Characteristics - **Description**: A high-quality, multi-turn dialogue dataset covering everyday topics. - **Features**: Includes dialogues, communication acts, and emotion annotations. - **Citation**: ``` @InProceedings{li2017dailydialog, author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi}, title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset}, booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)}, year = {2017} } ``` ## Training Configuration - **Learning Rate**: 2e-5 - **Batch Size**: 8 - **Number of Epochs**: 3 - **Weight Decay**: 0.01 ## Ethical Considerations Inherited from the GPT-2 base model and the DailyDialog dataset, this model may reflect biases or limitations present in its training data. Caution is advised when using it in sensitive contexts, as it could produce biased or inappropriate responses. ## How to Use ### Using the Model Directly ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer model = AutoModelForCausalLM.from_pretrained("harpertoken/harpertokenConvFT") tokenizer = AutoTokenizer.from_pretrained("harpertoken/harpertokenConvFT") # Prepare input input_text = "Hello, how are you?" 
inputs = tokenizer(input_text, return_tensors="pt") # Generate response outputs = model.generate(**inputs) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ### Using the Terminal Run the provided script to generate responses: ```bash python3 generate_response.py --input "Hello, how are you?" ``` ### Using the API **Check API Status:** ```bash curl http://localhost:8000/status ``` **Generate a Response:** ```bash curl -X POST http://localhost:8000/chat -H "Content-Type: application/json" -d '{"input_text": "Hello, how are you?"}' ``` ### Using FastAPI Documentation Interact with the API via the browser at: [http://localhost:8000/docs#/default/generate_response_chat_post](http://localhost:8000/docs#/default/generate_response_chat_post) ## Related Models - **harpertokenConvAI**: [https://huggingface.co/harpertoken/harpertokenConvAI](https://huggingface.co/harpertoken/harpertokenConvAI) - DistilBERT-based model for question answering. Note: This is not the base model for harpertokenConvFT due to incompatible architectures (DistilBERT vs GPT-2). - **Base Model**: This model is fine-tuned from GPT-2 ([openai/gpt2](https://huggingface.co/gpt2)). ## Model Differences harpertokenConvFT is a GPT-2 model for conversational AI, while harpertokenConvAI is a DistilBERT model for question answering. They have different architectures, tokenizers, and parameters, so weights cannot be fine-tuned from one into the other. For authoritative details, refer to each model's config.json.
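The card above lists perplexity among its metrics. As a reminder of how that metric is defined, perplexity is the exponential of the mean negative log-likelihood per token; a minimal illustrative sketch (plain Python, not the repository's evaluation code):

```python
# Perplexity = exp(mean negative log-likelihood per token).
# Illustrative sketch only; the actual evaluation code for
# harpertokenConvFT is not shown in the card.
import math

def perplexity(token_log_probs):
    """token_log_probs: natural-log probabilities the model assigned
    to each observed token in the evaluation text."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity 4.
print(perplexity([math.log(0.25)] * 10))
```

Lower perplexity means the model assigns higher probability to the held-out text.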
emergentai/cancer-efficientnetb7-undersampling
emergentai
2025-09-15T23:30:26Z
0
0
keras
[ "keras", "medical", "cervical-cancer", "histopathology", "undersampling", "image-classification", "base_model:google/efficientnet-b7", "base_model:finetune:google/efficientnet-b7", "license:mit", "region:us" ]
image-classification
2025-09-15T21:59:15Z
--- license: mit metrics: - accuracy - precision - recall - f1 base_model: - google/efficientnet-b7 pipeline_tag: image-classification tags: - medical - cervical-cancer - histopathology - undersampling --- # Model Card: EfficientNet-B7 for Cervical Cancer Image Classification This model fine-tunes **EfficientNet-B7** for the task of binary cervical cancer image classification (Negative vs. Positive). It was trained using undersampling to handle class imbalance. --- ## Model Details - **Developed by:** Beijuka / Pathogen Lab - **Funded by:** STI - **Model type:** Convolutional Neural Network (CNN) - **Input type:** Histopathology images (600x600, RGB) - **Output type:** Binary classification (Negative, Positive) - **License:** MIT - **Finetuned from:** `google/efficientnet-b7` <!-- ### Model Sources - **Repository:** [Your HF Repo URL] - **Paper [optional]:** [If you want to link e.g., EfficientNet or related research] - **Demo [optional]:** [Streamlit/Gradio app if you plan one] --> --- ## Uses ### Direct Use - Classification of cervical cancer images into Negative vs Positive cases. ### Downstream Use - Could be integrated into diagnostic support pipelines. - Adapted for related medical imaging classification tasks. ### Out-of-Scope Use - **Not** a replacement for professional medical diagnosis. - Should not be deployed clinically without regulatory approval. - Not suitable for non-cervical images. --- ## Bias, Risks, and Limitations - The dataset was undersampled → may affect generalizability. - Model performance varies by threshold (see below). - Limited dataset size (19 test images) means results may not generalize. - Potential domain shift if applied to different staining/preparation protocols. ### Recommendations - Validate on larger, more diverse datasets. - Carefully calibrate decision threshold depending on application (screening vs confirmatory). - Use alongside clinical expertise, not as a standalone tool.
--- ## How to Get Started ```python from huggingface_hub import hf_hub_download from tensorflow import keras model_path = hf_hub_download( "Beijuka/cancer-efficientnetb7-undersampling", "cancer_efficientnetB7_undersampling.keras" ) model = keras.models.load_model(model_path) ``` --- ## Training Details ### Training Data * Histopathology images of cervical cancer (size 600x600, RGB). * Class imbalance addressed via **undersampling**: * Positive: 84 images * Negative: 100 images * Preprocessing: Normalization + resizing. ### Training Procedure * Optimizer: Adam * Loss: Binary Crossentropy * Batch size: 8 * Learning rate: 1e-3 (initial), 1e-5 (fine-tuning) * Epochs: 50 (initial), 20 (fine-tuning) * EarlyStopping and ModelCheckpoint callbacks used. ### Data Splits (70:20:10) * **Training:** 128 images (70 Negative, 29 Positive Post-stained, 29 Positive Pre-stained) * **Validation:** 37 images (20 Negative, 8 Positive Post-stained, 9 Positive Pre-stained) * **Test:** 19 images (10 Negative, 5 Positive Post-stained, 4 Positive Pre-stained) ### Hardware * GPU: Tesla T4 (14GB) * CUDA Version: 12.4 * Software: TensorFlow/Keras --- ## Evaluation ### Testing Data * Independent test set: 19 images (10 Negative, 9 Positive) ### Metrics at Threshold 0.5 * **Accuracy:** 0.7368 * **Precision (Positive):** 0.8333 * **Recall (Positive):** 0.5556 * **F1-Score (Positive):** 0.6667 #### Confusion Matrix ``` [[9, 1], [4, 5]] ``` #### Sensitivity / Specificity * Negative: Sensitivity 0.90, Specificity 0.56 * Positive: Sensitivity 0.56, Specificity 0.90 ### Threshold Analysis * Best balance observed near 0.45–0.50 * Lower thresholds → higher recall, more false positives * Higher thresholds (>0.65) → model collapses to predicting only one class | Threshold | Accuracy | Precision | Recall | F1 | | --------- | -------- | --------- | ------ | ------ | | 0.00 | 0.4737 | 0.4737 | 1.0000 | 0.6429 | | 0.05 | 0.4737 | 0.4737 | 1.0000 | 0.6429 | | 0.10 | 0.5263 | 0.5000 | 1.0000 | 0.6667 | | 
0.15 | 0.5263 | 0.5000 | 0.8889 | 0.6400 | | 0.20 | 0.6316 | 0.5714 | 0.8889 | 0.6957 | | 0.25 | 0.6316 | 0.5833 | 0.7778 | 0.6667 | | 0.30 | 0.6316 | 0.6250 | 0.5556 | 0.5882 | | 0.35 | 0.6316 | 0.6250 | 0.5556 | 0.5882 | | 0.40 | 0.6842 | 0.7143 | 0.5556 | 0.6250 | | 0.45 | 0.7368 | 0.8333 | 0.5556 | 0.6667 | | 0.50 | 0.7368 | 0.8333 | 0.5556 | 0.6667 | | 0.55 | 0.6842 | 0.8000 | 0.4444 | 0.5714 | | 0.60 | 0.6842 | 1.0000 | 0.3333 | 0.5000 | | 0.65 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.70 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.75 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.80 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.85 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.90 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | | 0.95 | 0.5263 | 0.0000 | 0.0000 | 0.0000 | ### Comparison of performance on Pre vs Post-stained images | Comparison | Accuracy | F1-Score | Precision |Recall | | ------------------------------ | -------- | -------- | -------- | ------- | | Pre-stained Prediction | 0.6087 | 0.2703 |0.1613 |0.8333| | Post-stained Prediction | 0.7474 | 0.3441 |0.2222 |0.7619| --- ## Technical Specifications ### Model Architecture * EfficientNet-B7 backbone * Final Dense layer with sigmoid activation for binary classification ### Compute Infrastructure * **Hardware:** Tesla T4 GPU * **Software:** TensorFlow/Keras ---
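The threshold-0.5 metrics reported in the card follow directly from its confusion matrix [[9, 1], [4, 5]] (rows: actual Negative/Positive, columns: predicted). A quick sketch recomputing them as a sanity check:

```python
# Recompute the threshold-0.5 metrics from the card's confusion
# matrix [[9, 1], [4, 5]] (rows: actual Neg/Pos, cols: predicted).
tn, fp = 9, 1   # actual Negative: 9 correct, 1 predicted Positive
fn, tp = 4, 5   # actual Positive: 4 missed, 5 correct

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)          # sensitivity for the Positive class
f1        = 2 * precision * recall / (precision + recall)

print(round(accuracy, 4), round(precision, 4), round(recall, 4), round(f1, 4))
# matches the card: 0.7368 0.8333 0.5556 0.6667
```

The same arithmetic reproduces the sensitivity/specificity pairs: Negative sensitivity 9/10 = 0.90 and Positive sensitivity 5/9 ≈ 0.56.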
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757978726
svarekagerp
2025-09-15T23:26:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T23:26:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Dawntasy/B-Eye-O-Marker_CNN-V2
Dawntasy
2025-09-15T23:24:24Z
0
0
null
[ "onnx", "custom-cnn", "license:apache-2.0", "region:us" ]
null
2025-09-15T11:26:01Z
--- license: apache-2.0 --- B-Eye-O-Marker_CNN-V2 is a 20-million-parameter convolutional neural network trained on 60K samples of eye-disease data, designed to diagnose, detect, and identify eye illnesses from images of eyes. Check out our application at: https://dawnstoryrevelation.github.io/Our-Project/application.html. To download and use the ONNX model: 1. On the Hugging Face model page, open "Files and versions". 2. Click the download button next to B-Eye-O-Marker_CNN-V2.onnx and save it to your PC. 3. Open the application and click "Load ONNX". 4. You should now have access to the photo upload, camera, and manual options. Use the model from there.
GGUF-A-Lot/DeepHat-V1-7B-GGUF
GGUF-A-Lot
2025-09-15T23:23:19Z
0
0
null
[ "gguf", "arxiv:2309.00071", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-15T18:15:31Z
--- license: apache-2.0 --- <style> a.magic-link { color: #2a4db0; text-decoration: none; font-weight: bold; transition: all 0.3s ease; } a.magic-link:hover { color: #c0c0c0; text-shadow: 0 0 6px #ffd700, 0 0 12px #ffd700; } a.magic-link-purple { color: #3c048c; text-decoration: none; font-weight: bold; transition: all 0.3s ease; } a.magic-link-purple:hover { color: #1b013f; text-shadow: 0 0 6px #4701a8, 0 0 12px #4701a8; } .gguf-float-wrapper { max-width: 900px; margin: 0 auto; font-family: sans-serif; line-height: 1.6; display: flex; flex-wrap: wrap; gap: 20px; } .gguf-image { flex: 0 0 300px; } .gguf-image img { width: 100%; height: auto; } .gguf-text { flex: 1; min-width: 250px; } @media (max-width: 700px) { .gguf-float-wrapper { flex-direction: column; align-items: center; } .gguf-image { width: 80%; max-width: 250px; } .gguf-text { width: 100%; } } </style> <a href="https://huggingface.co/DeepHat/DeepHat-V1-7B" class="magic-link-purple">DeepHat-V1-7B</a> > [!TIP] > Quantized by 3Simplex using llama.cpp b907255f > From download to quant in 10 minutes using our magical <a href="https://github.com/3Simplex/Llama.Cpp-Toolbox" class="magic-link">LlamaCpp-Toolbox</a>! (2gbps Broadband, AMD 5800x & rx6900xt) <div class="gguf-float-wrapper"> <div class="gguf-image"> <img src="https://huggingface.co/spaces/GGUF-A-Lot/README/resolve/main/DeepHat-V1-7B.png" alt="Holy-Hand-GGUF-DeepHat-V1-7B"/> </div> <div class="gguf-text"> <br> <p><strong>Model Developer:</strong> Kindo - Deephat</p> <p><strong>Model Dates:</strong><br>September 2025</p> <p><strong>Data Freshness:</strong><br>September 2024</p> <p>The pretraining data has a cutoff date of September 2024.</p> </div> </div> ## Model Overview <br> ![DeepHat](https://huggingface.co/DeepHat/DeepHat-V1-7B/resolve/main/deephat_grey_logo.svg) <br> DeepHat is a model series that can be used for offensive and defensive cybersecurity. 
Access at [Deephat.ai](https://www.deephat.ai/) or go to [Kindo.ai](https://www.kindo.ai/) to create agents. # Community Join us on [Discord](https://discord.gg/8Ynkrcbk92) # Technical Overview DeepHat is a finetune of [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B/), and inherits the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 7.61B - Number of Parameters (Non-Embedding): 6.53B - Number of Layers: 28 - Number of Attention Heads (GQA): 28 for Q and 4 for KV - Context Length: Full 131,072 tokens - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts. ## Requirements We advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "DeepHat/DeepHat-V1-7B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "write a quick sort algorithm." messages = [ {"role": "system", "content": "You are DeepHat, created by Kindo.ai. 
You are a helpful assistant that is an expert in Cybersecurity and DevOps."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts The current `config.json` is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` # License Apache-2.0 + DeepHat Extended Version ## DeepHat Extension to Apache-2.0 License: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or 
modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` # Terms of Use By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model. This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis. Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model. 
This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.
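The YaRN settings quoted in the DeepHat card scale the context window by `factor`: the original 32,768-token maximum position embeddings times 4.0 give the full 131,072-token context length stated in the technical overview. As plain arithmetic:

```python
# YaRN extends the usable context by `factor` times the original
# maximum position embeddings, per the config snippet in the card above.
factor = 4.0
original_max_position_embeddings = 32768

extended_context = int(factor * original_max_position_embeddings)
print(extended_context)  # 131072, the "full" context length the card states
```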
barchimnases/blockassist
barchimnases
2025-09-15T23:19:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sedate masked spider", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T23:10:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sedate masked spider --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
csikasote/mms-1b-all-bemgen-combined-m50f100-52-DAT-1e-1
csikasote
2025-09-15T23:18:21Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-15T22:47:45Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m50f100-52-DAT-1e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m50f100-52-DAT-1e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.3718 - Cer: 0.1184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 1.0721 | 0.5618 | 100 | 3.0728 | 1.0 | | 0.4695 | 1.1236 | 200 | 1.5347 | 0.5267 | | 0.4001 | 1.6854 | 300 | 0.5215 | 0.1503 | | 0.5636 | 2.2472 | 400 | 0.4122 | 0.1201 | | 0.5876 | 2.8090 | 500 | 0.3884 | 0.1201 | | 0.6226 | 3.3708 | 600 | 0.5549 | 0.1909 | | 0.6024 | 3.9326 | 700 | 0.3816 | 0.1248 | | 0.615 | 4.4944 | 800 | 0.3718 | 0.1184 | | 0.6061 | 5.0562 | 900 | 0.3854 | 0.1359 | | 0.6268 | 5.6180 | 1000 | 0.3922 | 0.1332 | | 0.5987 | 6.1798 | 1100 | 0.3970 | 0.1371 | ### Framework 
versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
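In the hyperparameters listed above, the total train batch size is not an independent setting: it is the per-device batch size multiplied by the gradient accumulation steps. A one-line check using the card's values:

```python
# The effective (total) train batch size equals the per-device batch
# size times the gradient accumulation steps, per the card's values.
train_batch_size = 8
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16, as listed in the card
```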
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757978109
svarekagerp
2025-09-15T23:16:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T23:16:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Bellesteck/Qwen3-30B-A3B-NVFP4-vLLM
Bellesteck
2025-09-15T23:15:10Z
0
0
null
[ "safetensors", "qwen3_moe", "license:apache-2.0", "8-bit", "compressed-tensors", "region:us" ]
null
2025-09-15T23:01:43Z
--- license: apache-2.0 --- I'll let you know once I get it working. Requires the bleeding edge of everything, compiled from source.
KrizTech100/image-captioning-1
KrizTech100
2025-09-15T23:05:14Z
0
0
null
[ "safetensors", "blip", "license:bsd-3-clause", "region:us" ]
null
2025-09-15T22:29:34Z
--- license: bsd-3-clause ---
feelmadrain/whisper-small-ru-cv17
feelmadrain
2025-09-15T23:03:07Z
145
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "ru", "dataset:mozilla-foundation/common_voice_17_0", "arxiv:1910.09700", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-10T19:16:35Z
--- library_name: transformers datasets: - mozilla-foundation/common_voice_17_0 language: - ru base_model: - openai/whisper-small pipeline_tag: automatic-speech-recognition --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gartyposen/blockassist
gartyposen
2025-09-15T23:00:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing plump seal", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T22:51:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing plump seal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VBoussot/Panther
VBoussot
2025-09-15T22:58:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-12T18:14:30Z
--- license: apache-2.0 ---
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757976876
svarekagerp
2025-09-15T22:55:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T22:55:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mbarekmarouene/gemma-smartcompose-lora
mbarekmarouene
2025-09-15T22:52:00Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-09-13T22:39:23Z
--- license: apache-2.0 ---
mbarekmarouene/gemma2b_enron_lora
mbarekmarouene
2025-09-15T22:50:45Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-15T22:50:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RandomDud123456/Mini-Project_Models
RandomDud123456
2025-09-15T22:49:04Z
0
0
null
[ "tensorboard", "safetensors", "license:apache-2.0", "region:us" ]
null
2025-09-15T17:55:01Z
--- license: apache-2.0 ---
BootesVoid/cmfiozh96068qx0n0130uxpks_cmflo78ea08fvx0n0hhjqyf0g
BootesVoid
2025-09-15T22:47:32Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-15T22:47:30Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ALEX1 --- # Cmfiozh96068Qx0N0130Uxpks_Cmflo78Ea08Fvx0N0Hhjqyf0G <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ALEX1` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ALEX1", "lora_weights": "https://huggingface.co/BootesVoid/cmfiozh96068qx0n0130uxpks_cmflo78ea08fvx0n0hhjqyf0g/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmfiozh96068qx0n0130uxpks_cmflo78ea08fvx0n0hhjqyf0g', weight_name='lora.safetensors') image = pipeline('ALEX1').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the 
[community tab](https://huggingface.co/BootesVoid/cmfiozh96068qx0n0130uxpks_cmflo78ea08fvx0n0hhjqyf0g/discussions) to add images that show off what you've made with this LoRA.
csikasote/mms-1b-all-bemgen-combined-m50f100-52-DAT-5e-2
csikasote
2025-09-15T22:47:08Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-15T22:21:58Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m50f100-52-DAT-5e-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m50f100-52-DAT-5e-2 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.6953 - Cer: 0.2270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 52 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.7346 | 0.5618 | 100 | 3.1211 | 1.0 | | 0.3281 | 1.1236 | 200 | 2.1320 | 0.7434 | | 0.389 | 1.6854 | 300 | 0.9736 | 0.2649 | | 0.5674 | 2.2472 | 400 | 0.6953 | 0.2269 | | 0.5956 | 2.8090 | 500 | 0.6513 | 0.2282 | | 0.6069 | 3.3708 | 600 | 0.8638 | 0.2824 | | 0.6197 | 3.9326 | 700 | 0.7599 | 0.2537 | | 0.6104 | 4.4944 | 800 | 0.7466 | 0.2555 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
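The character error rate (CER) reported above is the character-level edit distance between the hypothesis and the reference transcription, divided by the reference length. A minimal standard-library sketch of that metric (illustrative only, not the implementation used to produce this card's numbers):

```python
def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: Levenshtein edit distance over reference length.
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # distances for the empty reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return prev[n] / m if m else 0.0

print(cer("umoyo", "umyo"))  # one deletion over five reference characters -> 0.2
```

Libraries such as `jiwer` or `evaluate` compute the same quantity in practice; the sketch is only to make the reported metric concrete.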
skyxyz/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-toothy_pale_cockroach
skyxyz
2025-09-15T22:46:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am toothy_pale_cockroach", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T22:46:19Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am toothy_pale_cockroach --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
trentmkelly/Llama-3.1-8B-Instruct-reddit-v3
trentmkelly
2025-09-15T22:45:37Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-15T22:45:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_0_from_61
ChenWu98
2025-09-15T22:45:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048", "base_model:finetune:ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048", "endpoints_compatible", "region:us" ]
null
2025-09-15T04:41:50Z
--- base_model: ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048 library_name: transformers model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_0_from_61 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_0_from_61 This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048](https://huggingface.co/ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_0_from_61", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/839xedqt) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nori47/BuXis
nori47
2025-09-15T22:43:21Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2025-09-15T22:43:20Z
--- license: cc-by-nc-4.0 ---
fpadovani/cds_shuffle_1gram_51
fpadovani
2025-09-15T22:42:29Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T21:57:45Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: cds_shuffle_1gram_51 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cds_shuffle_1gram_51 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 51 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 492 | 3.9822 | | 4.577 | 2.0 | 984 | 3.7390 | | 3.5537 | 3.0 | 1476 | 3.6259 | | 3.3312 | 4.0 | 1968 | 3.5668 | | 3.2063 | 5.0 | 2460 | 3.5435 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
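The `linear` scheduler with 200 warmup steps ramps the learning rate from 0 up to the base value of 1e-4, then decays it linearly back to 0 over the remaining steps (2460 total, per the table above). A minimal sketch of that schedule, mirroring the behavior of `transformers.get_linear_schedule_with_warmup`:

```python
def linear_warmup_decay(step: int, warmup_steps: int, total_steps: int, base_lr: float) -> float:
    # Linear warmup from 0 to base_lr, then linear decay from base_lr down to 0.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr, warmup, total = 1e-4, 200, 2460  # values taken from the hyperparameters above
print(linear_warmup_decay(0, warmup, total, base_lr))     # start of warmup: 0.0
print(linear_warmup_decay(warmup, warmup, total, base_lr))  # peak: 1e-4
```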
DeathGodlike/Pinecone-sage-24b_H8-4.0BPW_EXL3
DeathGodlike
2025-09-15T22:41:50Z
0
0
safetensors
[ "safetensors", "exl3", "4-bit", "text-generation", "base_model:Entropicengine/Pinecone-sage-24b", "base_model:quantized:Entropicengine/Pinecone-sage-24b", "license:apache-2.0", "region:us" ]
text-generation
2025-09-15T22:41:49Z
--- license: apache-2.0 base_model: - Entropicengine/Pinecone-sage-24b base_model_relation: quantized pipeline_tag: text-generation library_name: safetensors tags: - exl3 - 4-bit --- ## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Pinecone-sage-24b_H8-4.0BPW_EXL3/tree/H8-4.0BPW) ] # Original model: [Pinecone-sage-24b](https://huggingface.co/Entropicengine/Pinecone-sage-24b) by [Entropicengine](https://huggingface.co/Entropicengine)
Regan0323/Llama-3.2-3B-Instruct-full
Regan0323
2025-09-15T22:40:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-05T21:32:42Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: transformers model_name: Llama-3.2-3B-Instruct-full tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for Llama-3.2-3B-Instruct-full This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Regan0323/Llama-3.2-3B-Instruct-full", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
agentlans/granite-3.3-2b-notetaker
agentlans
2025-09-15T22:38:43Z
0
0
null
[ "safetensors", "granite", "notetaking", "academic", "study-skills", "summarization", "en", "dataset:agentlans/note-taking", "base_model:ibm-granite/granite-3.3-2b-instruct", "base_model:finetune:ibm-granite/granite-3.3-2b-instruct", "license:apache-2.0", "region:us" ]
summarization
2025-09-15T22:12:16Z
--- license: apache-2.0 datasets: - agentlans/note-taking language: - en base_model: - ibm-granite/granite-3.3-2b-instruct tags: - notetaking - academic - study-skills - summarization --- # Granite 3.3 2B Notetaker This model generates concise point-form notes in Markdown format based on the provided text. Note that the input and output texts should be properly escaped XML strings. Input format: ```xml Take notes in point form Markdown format: <input>[INPUT TEXT]</input> ``` Output format: ```xml <output>[NOTES IN MARKDOWN FORMAT]</output> ``` ## Example Input: Source: ["Problem solving" article on Wikipedia](https://en.wikipedia.org/wiki/Problem_solving) ```xml Take notes in point form Markdown format: <input>Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles.[1] Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for.[2] Similarly, one may distinguish formal or fact-based problems requiring psychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such as tactful behavior, fashion, or gift choices.[3] Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, programmers, and consultants are largely problem solvers for issues that require technical skills and knowledge beyond general competence. 
Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop a scalable solution. There are many specialized problem-solving techniques and methods in fields such as science, engineering, business, medicine, mathematics, computer science, philosophy, and social organization. The mental techniques to identify, analyze, and solve problems are studied in psychology and cognitive sciences. Also widely researched are the mental obstacles that prevent people from finding solutions; problem-solving impediments include confirmation bias, mental set, and functional fixedness.</input> ``` Output: ```xml <output># Problem Solving: Key Concepts ## What is Problem Solving? * The process of achieving a goal by overcoming obstacles. * Applied to a wide range of activities, from personal tasks to complex business/technical issues. ## Classification of Problem-Solving Tasks * **Simple Problem Solving (SPS):** Addressing a single, defined issue. * **Complex Problem Solving (CPS):** Dealing with multiple interrelated obstacles. * **Well-Defined Problems:** Clear obstacles and goals. * **Ill-Defined Problems:** Troublesome situation, but unclear resolution. * **Formal/Fact-Based Problems:** Require psychometric intelligence (knowledge-based). * **Socio-Emotional Problems:** Depend on emotions, group dynamics, and individual behavior (e.g., tactfulness, fashion). ## Resources and Knowledge Required * Solutions require sufficient resources and knowledge. ## Problem Solvers &amp; Business Opportunities * Professionals (lawyers, doctors, programmers, consultants) often solve technical problems requiring specialized knowledge. * Identifying and solving problems can create profitable markets. * Larger, more widespread problems offer greater opportunity for scalable solutions. 
## Problem-Solving Techniques &amp; Mental Obstacles * Specialized techniques exist in various fields (science, engineering, business, medicine, etc.). * Psychology and cognitive sciences study mental techniques for problem identification and solution. * **Impediments to Problem Solving:** * Confirmation bias * Mental set * Functional fixedness</output> ```
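Since the template expects XML-escaped input and returns notes wrapped in an `<output>` tag, here is a minimal helper sketch for building the prompt and unwrapping the response. The function names are illustrative, not part of the model's API:

```python
import re
from xml.sax.saxutils import escape, unescape

def build_prompt(text: str) -> str:
    """Wrap raw input text in the model's expected template, XML-escaped."""
    return f"Take notes in point form Markdown format: <input>{escape(text)}</input>"

def extract_notes(response: str) -> str:
    """Pull the Markdown notes out of the <output>...</output> wrapper."""
    match = re.search(r"<output>(.*)</output>", response, re.DOTALL)
    return unescape(match.group(1)) if match else response

prompt = build_prompt("Problem solving & decision making overlap.")
# The '&' in the source text is escaped to '&amp;' inside the <input> tag.
```

Pass `prompt` to the model through your usual chat pipeline, then run the generated text through `extract_notes` to recover plain Markdown.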
Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF
Tumle
2025-09-15T22:36:38Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Tumle/MA-Danish-RP-nemo-Mistral-12B", "base_model:quantized:Tumle/MA-Danish-RP-nemo-Mistral-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-15T22:35:57Z
--- base_model: Tumle/MA-Danish-RP-nemo-Mistral-12B tags: - llama-cpp - gguf-my-repo --- # Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF This model was converted to GGUF format from [`Tumle/MA-Danish-RP-nemo-Mistral-12B`](https://huggingface.co/Tumle/MA-Danish-RP-nemo-Mistral-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Tumle/MA-Danish-RP-nemo-Mistral-12B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF --hf-file ma-danish-rp-nemo-mistral-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF --hf-file ma-danish-rp-nemo-mistral-12b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF --hf-file ma-danish-rp-nemo-mistral-12b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Tumle/MA-Danish-RP-nemo-Mistral-12B-Q6_K-GGUF --hf-file ma-danish-rp-nemo-mistral-12b-q6_k.gguf -c 2048 ```
mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF
mradermacher
2025-09-15T22:36:37Z
0
0
transformers
[ "transformers", "vllm", "unsloth", "abliterated", "uncensored", "en", "base_model:huihui-ai/Huihui-gpt-oss-120b-mxfp4-abliterated", "base_model:finetune:huihui-ai/Huihui-gpt-oss-120b-mxfp4-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-15T06:45:40Z
--- base_model: huihui-ai/Huihui-gpt-oss-120b-mxfp4-abliterated language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - vllm - unsloth - abliterated - uncensored --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/huihui-ai/Huihui-gpt-oss-120b-mxfp4-abliterated <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_S.gguf.part2of2) | Q3_K_S | 66.2 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q2_K.gguf.part2of2) | Q2_K | 66.3 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.IQ4_XS.gguf.part2of2) | IQ4_XS | 67.1 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_M.gguf.part2of2) | Q3_K_M | 71.2 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q3_K_L.gguf.part2of2) | Q3_K_L | 73.5 | | | [PART 
1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q4_K_S.gguf.part2of2) | Q4_K_S | 81.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q4_K_M.gguf.part2of2) | Q4_K_M | 88.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q5_K_S.gguf.part2of2) | Q5_K_S | 88.1 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q5_K_M.gguf.part2of2) | Q5_K_M | 94.0 | | | [PART 1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q6_K.gguf.part3of3) | Q6_K | 124.3 | very good quality | | [PART 
1](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Huihui-gpt-oss-120b-mxfp4-abliterated-GGUF/resolve/main/Huihui-gpt-oss-120b-mxfp4-abliterated.Q8_0.gguf.part3of3) | Q8_0 | 124.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
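The split files in the table above have to be joined back into a single `.gguf` before loading. TheBloke's READMEs describe doing this with `cat`; an equivalent minimal Python sketch (the glob pattern and filenames are illustrative, adjust to whichever quant you downloaded):

```python
from pathlib import Path

def concat_gguf_parts(part_paths, dest):
    """Join split GGUF files (".part1of2", ".part2of2", ...) into one file.

    Parts are sorted by name; for the two- and three-part splits above,
    lexicographic order matches the partNofM numeric order.
    """
    with open(dest, "wb") as out:
        for part in sorted(Path(p) for p in part_paths):
            out.write(part.read_bytes())

# Illustrative invocation -- the pattern is an assumption, not a fixed name:
# concat_gguf_parts(Path(".").glob("*.Q6_K.gguf.part*"),
#                   "Huihui-gpt-oss-120b-mxfp4-abliterated.Q6_K.gguf")
```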
glif-loradex-trainer/Weetile_RuiKomatsuzaki
glif-loradex-trainer
2025-09-15T22:36:03Z
0
0
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2025-09-15T22:35:10Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1757975629339__000001500_0.jpg text: wounded centaur, mythical creature Rui Komatsuzaki - output: url: samples/1757975654168__000001500_1.jpg text: ruins of athens, snake Rui Komatsuzaki - output: url: samples/1757975678912__000001500_2.jpg text: silver vampire sword Rui Komatsuzaki base_model: black-forest-labs/FLUX.1-dev trigger: "Rui Komatsuzaki" instance_prompt: "Rui Komatsuzaki" license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # RuiKomatsuzaki Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Weetile`. <Gallery /> ## Trigger words You should use `Rui Komatsuzaki` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/Weetile_RuiKomatsuzaki/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
IoannisKat1/legal-bert-base-uncased-new2
IoannisKat1
2025-09-15T22:32:08Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:1580", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:nlpaueb/legal-bert-base-uncased", "base_model:finetune:nlpaueb/legal-bert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-15T22:31:53Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:1580 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: nlpaueb/legal-bert-base-uncased widget: - source_sentence: What types of data processing allow for derogations from certain rights under Union or Member State law? sentences: - '1.Where personal data relating to a data subject are collected from the data subject, the controller shall, at the time when personal data are obtained, provide the data subject with all of the following information: (a) the identity and the contact details of the controller and, where applicable, of the controller''s representative; (b) the contact details of the data protection officer, where applicable; (c) the purposes of the processing for which the personal data are intended as well as the legal basis for the processing; 4.5.2016 L 119/40 (d) where the processing is based on point (f) of Article 6(1), the legitimate interests pursued by the controller or by a third party; (e) the recipients or categories of recipients of the personal data, if any; (f) where applicable, the fact that the controller intends to transfer personal data to a third country or international organisation and the existence or absence of an adequacy decision by the Commission, or in the case of transfers referred to in Article 46 or 47, or the second subparagraph of Article 49(1), reference to the appropriate or suitable safeguards and the means by which to obtain a copy of them or where they have been made available. 
2.In addition to the information referred to in paragraph 1, the controller shall, at the time when personal data are obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing: (a) the period for which the personal data will be stored, or if that is not possible, the criteria used to determine that period; (b) the existence of the right to request from the controller access to and rectification or erasure of personal data or restriction of processing concerning the data subject or to object to processing as well as the right to data portability; (c) where the processing is based on point (a) of Article 6(1) or point (a) of Article 9(2), the existence of the right to withdraw consent at any time, without affecting the lawfulness of processing based on consent before its withdrawal; (d) the right to lodge a complaint with a supervisory authority; (e) whether the provision of personal data is a statutory or contractual requirement, or a requirement necessary to enter into a contract, as well as whether the data subject is obliged to provide the personal data and of the possible consequences of failure to provide such data; (f) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. 3.Where the controller intends to further process the personal data for a purpose other than that for which the personal data were collected, the controller shall provide the data subject prior to that further processing with information on that other purpose and with any relevant further information as referred to in paragraph 2 4.Paragraphs 1, 2 and 3 shall not apply where and insofar as the data subject already has the information.' 
- The processing of personal data should also be regarded to be lawful where it is necessary to protect an interest which is essential for the life of the data subject or that of another natural person. Processing of personal data 4.5.2016 L 119/8 Official Journal of the European Union EN - '1.Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject. Those safeguards shall ensure that technical and organisational measures are in place in particular in 4.5.2016 L 119/84 order to ensure respect for the principle of data minimisation. Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner. 2.Where personal data are processed for scientific or historical research purposes or statistical purposes, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18 and 21 subject to the conditions and safeguards referred to in paragraph 1 of this Article in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes. 
3.Where personal data are processed for archiving purposes in the public interest, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18, 19, 20 and 21 subject to the conditions and safeguards referred to in paragraph 1 of this Article in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes. 4.Where processing referred to in paragraphs 2 and 3 serves at the same time another purpose, the derogations shall apply only to processing for the purposes referred to in those paragraphs.' - source_sentence: What is the specific date mentioned in the text? sentences: - '1.Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject. Those safeguards shall ensure that technical and organisational measures are in place in particular in 4.5.2016 L 119/84 order to ensure respect for the principle of data minimisation. Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner. 2.Where personal data are processed for scientific or historical research purposes or statistical purposes, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18 and 21 subject to the conditions and safeguards referred to in paragraph 1 of this Article in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes. 
3.Where personal data are processed for archiving purposes in the public interest, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18, 19, 20 and 21 subject to the conditions and safeguards referred to in paragraph 1 of this Article in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes. 4.Where processing referred to in paragraphs 2 and 3 serves at the same time another purpose, the derogations shall apply only to processing for the purposes referred to in those paragraphs.' - '1) ''personal data'' means any information relating to an identified or identifiable natural person (''data subject''); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person; (2) ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction; (3) ‘restriction of processing’ means the marking of stored personal data with the aim of limiting their processing in the future; (4) ‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person''s performance at work, economic
situation, health, personal preferences, interests, reliability, behaviour, location or movements; (5) ‘pseudonymisation’ means the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person; (6) ‘filing system’ means any structured set of personal data which are accessible according to specific criteria, whether centralised, decentralised or dispersed on a functional or geographical basis; (7) ‘controller’ means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law; (8) ‘processor’ means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller; (9) ‘recipient’ means a natural or legal person, public authority, agency or another body, to which the personal data are disclosed, whether a third party or not.
However, public authorities which may receive personal data in the framework of a particular inquiry in accordance with Union or Member State law shall not be regarded as recipients; the processing of those data by those public authorities shall be in compliance with the applicable data protection rules according to the purposes of the processing; (10) ‘third party’ means a natural or legal person, public authority, agency or body other than the data subject, controller, processor and persons who, under the direct authority of the controller or processor, are authorised to process personal data; (11) ‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject''s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her; (12) ‘personal data breach’ means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed; (13) ‘genetic data’ means personal data relating to the inherited or acquired genetic characteristics of a natural person which give unique information about the physiology or the health of that natural person and which result, in particular, from an analysis of a biological sample from the natural person in question; (14) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data; (15) ‘data concerning health’ means personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status; (16) ‘main establishment’
means: (a) as regards a controller with establishments in more than one Member State, the place of its central administration in the Union, unless the decisions on the purposes and means of the processing of personal data are taken in another establishment of the controller in the Union and the latter establishment has the power to have such decisions implemented, in which case the establishment having taken such decisions is to be considered to be the main establishment; (b) as regards a processor with establishments in more than one Member State, the place of its central administration in the Union, or, if the processor has no central administration in the Union, the establishment of the processor in the Union where the main processing activities in the context of the activities of an establishment of the processor take place to the extent that the processor is subject to specific obligations under this Regulation; (17) ‘representative’ means a natural or legal person established in the Union who, designated by the controller or processor in writing pursuant to Article 27, represents the controller or processor with regard to their respective obligations under this Regulation; (18) ‘enterprise’ means a natural or legal person engaged in an economic activity, irrespective of its legal form, including partnerships or associations regularly engaged in an economic activity; (19) ‘group of undertakings’ means a controlling undertaking and its controlled undertakings; (20) ‘binding corporate rules’ means personal data protection policies which are adhered to by a controller or processor established on the territory of a Member State for transfers or a set of transfers of personal data to a controller or processor in one or more third countries within a group of undertakings, or group of enterprises engaged in a joint economic activity; (21) ‘supervisory authority’ means an independent public authority which is established by a Member State pursuant
to Article 51; (22) ‘supervisory authority concerned’ means a supervisory authority which is concerned by the processing of personal data because: (a) the controller or processor is established on the territory of the Member State of that supervisory authority; (b) data subjects residing in the Member State of that supervisory authority are substantially affected or likely to be substantially affected by the processing; or (c) a complaint has been lodged with that supervisory authority; (23) ‘cross-border processing’ means either: (a) processing of personal data which takes place in the context of the activities of establishments in more than one Member State of a controller or processor in the Union where the controller or processor is established in more than one Member State; or (b) processing of personal data which takes place in the context of the activities of a single establishment of a controller or processor in the Union but which substantially affects or is likely to substantially affect data subjects in more than one Member State. (24) ‘relevant and reasoned objection’ means an objection to a draft decision as to whether there is an infringement of this Regulation, or whether envisaged action in relation to the controller or processor complies with this Regulation, which clearly demonstrates the significance of the risks posed by the draft decision as regards the fundamental rights and freedoms of data subjects and, where applicable, the free flow of personal data within the Union; (25) ‘information society service’ means a service as defined in point (b) of Article 1(1) of Directive (EU) 2015/1535 of the European Parliament and of the Council (1); (26) ‘international organisation’ means an organisation and its subordinate bodies governed by public international law, or any other body which is set up by, or on the basis of, an agreement between two or more countries.'
- '1.A transfer of personal data to a third country or an international organisation may take place where the Commission has decided that the third country, a territory or one or more specified sectors within that third country, or the international organisation in question ensures an adequate level of protection. Such a transfer shall not require any specific authorisation. 2.When assessing the adequacy of the level of protection, the Commission shall, in particular, take account of the following elements: (a) the rule of law, respect for human rights and fundamental freedoms, relevant legislation, both general and sectoral, including concerning public security, defence, national security and criminal law and the access of public authorities to personal data, as well as the implementation of such legislation, data protection rules, professional rules and security measures, including rules for the onward transfer of personal data to another third country or international organisation which are complied with in that country or international organisation, case-law, as well as effective and enforceable data subject rights and effective administrative and judicial redress for the data subjects whose personal data are being transferred; (b) the existence and effective functioning of one or more independent supervisory authorities in the third country or to which an international organisation is subject, with responsibility for ensuring and enforcing compliance with the data protection rules, including adequate enforcement powers, for assisting and advising the data subjects in exercising their rights and for cooperation with the supervisory authorities of the Member States; and (c) the international commitments the third country or international organisation concerned has entered into, or other obligations arising from legally binding conventions or instruments as well as from its participation in multilateral or regional systems, in particular in relation to the 
protection of personal data. 3.The Commission, after assessing the adequacy of the level of protection, may decide, by means of implementing act, that a third country, a territory or one or more specified sectors within a third country, or an international organisation ensures an adequate level of protection within the meaning of paragraph 2 of this Article. The implementing act shall provide for a mechanism for a periodic review, at least every four years, which shall take into account all relevant developments in the third country or international organisation. The implementing act shall specify its territorial and sectoral application and, where applicable, identify the supervisory authority or authorities referred to in point (b) of paragraph 2 of this Article. The implementing act shall be adopted in accordance with the examination procedure referred to in Article 93(2). 4.The Commission shall, on an ongoing basis, monitor developments in third countries and international organisations that could affect the functioning of decisions adopted pursuant to paragraph 3 of this Article and decisions adopted on the basis of Article 25(6) of Directive 95/46/EC. 5.The Commission shall, where available information reveals, in particular following the review referred to in paragraph 3 of this Article, that a third country, a territory or one or more specified sectors within a third country, or an international organisation no longer ensures an adequate level of protection within the meaning of paragraph 2 of this Article, to the extent necessary, repeal, amend or suspend the decision referred to in paragraph 3 of this Article by means of implementing acts without retro-active effect. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 93(2). 
On duly justified imperative grounds of urgency, the Commission shall adopt immediately applicable implementing acts in accordance with the procedure referred to in Article 93(3). 6.The Commission shall enter into consultations with the third country or international organisation with a view to remedying the situation giving rise to the decision made pursuant to paragraph 5. 7.A decision pursuant to paragraph 5 of this Article is without prejudice to transfers of personal data to the third country, a territory or one or more specified sectors within that third country, or the international organisation in question pursuant to Articles 46 to 49. 8.The Commission shall publish in the Official Journal of the European Union and on its website a list of the third countries, territories and specified sectors within a third country and international organisations for which it has decided that an adequate level of protection is or is no longer ensured. 9.Decisions adopted by the Commission on the basis of Article 25(6) of Directive 95/46/EC shall remain in force until amended, replaced or repealed by a Commission Decision adopted in accordance with paragraph 3 or 5 of this Article.' - source_sentence: Which type of requests did the banks accept? sentences: - '**Court (Civil/Criminal): Civil** **Provisions:** **Time of commission of the act:** **Outcome (not guilty, guilty):** **Rationale:** **Facts:** The plaintiff holds credit card number ............ with the defendant banking corporation. Based on the application for alternative networks dated 19/7/2015 with number ......... submitted at a branch of the defendant, he was granted access to the electronic banking service (e-banking) to conduct banking transactions (debit, credit, updates, payments) remotely. On 30/11/2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to withdraw a total amount of €3,121.75 from the aforementioned credit card.
Specifically, the plaintiff received an email at 1:35 PM on 29/11/2020 from sender ...... with address ........, informing him that due to an impending system change, he needed to verify the mobile phone number linked to the credit card, urging him to complete the verification process within the next 24 hours by following a link titled ........; otherwise, his account would be locked for security reasons. The plaintiff read this email on the afternoon of 30 November 2020 and, believing it was from the defendant, followed the instructions and proceeded via the provided link to a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter the six-digit security code (.........) that had just been sent to his mobile phone by the defendant at 3:41 PM, with the note that it was an activation code for his ........ card at ........., which he entered. Subsequently, the plaintiff received, according to his statements, a new email (not submitted), which requested him to enter the details of the aforementioned credit card, specifically the name of the cardholder and the card number, not the PIN, which he also entered, convinced that he was within the online environment of the defendant. Then, at 3:47 PM, he received a message on his mobile phone from the defendant containing the exact same content as the one he received at 3:41 PM, while at 3:50 PM he received a message stating that the activation of his ......... card at ....... had been completed. Once the plaintiff read this, he became concerned that something was not right, and immediately called (at 4:41 PM) the defendant''s call center to inform them. There, the employees, with whom he finally connected at 5:04 PM due to high call center volume, advised him to delete the relevant emails, cancel his credit card, change his access passwords for the service, and submit a dispute request regarding the conducted transactions. 
The plaintiff electronically sent this request to the defendant, disputing the detailed transactions amounting to €3,121.75, which were conducted on 30/11/2020 during the time frame of 16:37:45-16:43:34 PM, arguing that he had neither performed them himself nor authorized anyone else to do so. The plaintiff specifically disputed the following transactions, as evidenced by the account activity of the disputed credit card during the aforementioned timeframe: a) transaction number ......... amounting to €150.62 conducted on 30/11/2020 at 4:43:34 PM, b) transaction number ........ amounting to €293.20 conducted on 30/11/2020 at 4:42:40 PM, c) transaction number ............ amounting to €295.21 conducted on 30/11/2020 at 4:42:10 PM, d) transaction number .......... amounting to €299.22 conducted on 30/11/2020 at 4:41:31 PM, e) transaction number ........ amounting to €297.21 conducted on 30/11/2020 at 4:41:01 PM, f) transaction number ........ amounting to €299.22 conducted on 30/11/2020 at 4:40:27 PM, g) transaction number ....... amounting to €299.22 conducted on 30/11/2020 at 4:39:55 PM, h) transaction number ...... amounting to €299.22 conducted on 30/11/2020 at 4:39:22 PM, i) transaction number ......... amounting to €297.22 conducted on 30/11/2020 at 4:38:52 PM, j) transaction number ......... amounting to €295.21 conducted on 30/11/2020 at 4:38:17 PM, and k) transaction number ......... amounting to €296.21 conducted on 30/11/2020 at 4:37:45 PM. In its response letter dated 21/12/2020, the defendant denied responsibility for the costs of the aforementioned transactions, placing the entire blame on the plaintiff for the leak of his card details and security code to the fraudulent page.
The plaintiff, completely denying any fault for the conducted transactions, repeatedly contacted the defendant, both by phone and via email (see emails dated 15/1/2021 and 11/2/2021), while on 2/3/2021, he electronically sent a report dated 1/03/2021 to the Consumer Advocate's email address, recounting the events and requesting that the aforementioned Independent Authority intervene to have the disputed debt canceled. In its letter with reference number ...../27.04.2021, the aforementioned Independent Authority informed the plaintiff that the case was outside its mediating role and was therefore archived. Subsequently, the plaintiff sent the defendant on 5/3/2021 his extrajudicial statement dated 4/3/2021, calling upon it to fully cancel the debt of €3,121.75 that had been unjustly incurred against him within two days and to immediately instruct the representatives of the collection agency working with it to cease contacting him regarding the disputed case. The defendant sent the plaintiff a message on his mobile phone on 20/04/2021 informing him that his case was still being processed due to lengthy operational requirements, while on 23/04/2021, via email, it informed him that considering their good cooperation and his efforts to keep them updated, it had reviewed his case and decided to refund him the amounts of the transactions that were conducted after his contact with their representatives on 30/11/2020 at 4:41 PM, totaling €1,038.25, specifically the following: a) transaction of €150.62 conducted on 30/11/2020 at 4:43 PM, b) transaction of €295.21 conducted on 30/11/2020 at 4:42 PM, c) transaction of €293.20 conducted on 30/11/2020 at 4:42 PM, and d) transaction of €299.22 conducted on 30/11/2020 at 4:41 PM.
Beyond this, the defendant refused to refund the plaintiff the amount of the remaining transactions conducted on 30/11/2020, totaling €2,376.08 (and not €2,376.48 as incorrectly stated by the plaintiff in his lawsuit), which the plaintiff ultimately fully paid, transferring €2,342.77 to the defendant on 7/06/2021 and €33.31 on 15/06/2021 (see related deposit receipts).' - '1.Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person''s sex life or sexual orientation shall be prohibited. 2.Paragraph 1 shall not apply if one of the following applies: (a) the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, except where Union or Member State law provide that the prohibition referred to in paragraph 1 may not be lifted by the data subject; (b) processing is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law in so far as it is authorised by Union or Member State law or a collective agreement pursuant to Member State law providing for appropriate safeguards for the fundamental rights and the interests of the data subject; (c) processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent; (d) processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim and on condition that the processing relates solely to the members or to
former members of the body or to persons who have regular contact with it in connection with its purposes and that the personal data are not disclosed outside that body without the consent of the data subjects; (e) processing relates to personal data which are manifestly made public by the data subject; (f) processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity; (g) processing is necessary for reasons of substantial public interest, on the basis of Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject; (h) processing is necessary for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services on the basis of Union or Member State law or pursuant to contract with a health professional and subject to the conditions and safeguards referred to in paragraph 3; (i) processing is necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of health care and of medicinal products or medical devices, on the basis of Union or Member State law which provides for suitable and specific measures to safeguard the rights and freedoms of the data subject, in particular professional secrecy; (j) processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) based on Union or Member State law which shall be proportionate to the aim pursued, respect the essence of the right to
data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject. 3.Personal data referred to in paragraph 1 may be processed for the purposes referred to in point (h) of paragraph 2 when those data are processed by or under the responsibility of a professional subject to the obligation of professional secrecy under Union or Member State law or rules established by national competent bodies or by another person also subject to an obligation of secrecy under Union or Member State law or rules established by national competent bodies. 4.Member States may maintain or introduce further conditions, including limitations, with regard to the processing of genetic data, biometric data or data concerning health.' - "**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission\ \ of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Claim for\ \ compensation and monetary satisfaction due to moral damage against a mobile\ \ phone company and a credit institution within the framework of inadequate fulfillment\ \ of a payment services contract for \"web banking.\" Appropriate actions for\ \ mobile phone companies in case of a request for a \"sim\" card replacement due\ \ to wear or loss. They must verify the customer's identity based on the personal\ \ details and identification information registered in their system but are not\ \ liable for any changes in the latter that were not timely communicated to them.\ \ Further security measures such as phone communication or sending an SMS to the\ \ mobile number holder are not required. Payment services under Law 4537/2018.\ \ Obligation of the payment service provider, such as banks, to inform the payer\ \ after receiving a relevant order for making a payment.
The content of this varies\ \ per case, such as sending a personalized code to the user's mobile phone for\ \ transaction approval, as well as sending an email immediately after its completion.\ \ However, the bank is not liable for customer damage resulting from illicit electronic\ \ transactions due to third-party interception of either the access codes for\ \ electronic banking transactions or the sim card and the phone number to which\ \ the personalized codes for approving the aforementioned transactions are sent,\ \ within the framework of increased security protocols. Appropriate actions by\ \ banks upon diagnosing illicit banking transactions that may be fraudulent. Relevant\ \ criteria for consideration. The evidence did not indicate negligent and thus\ \ tortious behavior from all defendants. The claim is dismissed. \n\n**Facts:**\ \ In the present claim, upon due assessment of its content, the plaintiff states\ \ that he has a mobile phone subscription with the first defendant, a mobile phone\ \ company. On October 26, 2020, in the morning, he realized that his mobile phone\ \ was offline, and by noon, he received email notifications from Bank ..........\ \ and ............ (whose third and fourth defendants are de facto universal successors,\ \ respectively), with which he holds an account, regarding transactions he had\ \ made. From phone calls from his home phone to Bank .............. and ............\ \ Bank, he was informed that on the same day, in a very short period, four money\ \ transfers had been made from the account he maintains at Bank ..............,\ \ specifically, an amount of €15,000 was transferred to the account mentioned\ \ in the claim document under the name ..........., at ........ an amount of €15,000\ \ was transferred to the account mentioned in the claim document under the name\ \ ......... at ...........
Bank, an amount of €15,000 was transferred to the plaintiff's\ \ account with his daughter as a co-holder at ......... Bank, and an amount of\ \ €6,700 was transferred from another of his accounts to the account from which\ \ the transfer to the aforementioned accounts of €45,000 was made. Additionally,\ \ from the plaintiff's account with his daughter as a co-holder at ..........\ \ Bank, an amount of €9,999 was transferred to an account under the name of ....\ \ . He attempted to log into the online banking service of Bank ......... from\ \ his home computer, but found that the service was locked, while regarding the\ \ corresponding service of ........... Bank, he requested alongside his daughter\ \ to 'lock' it. In a phone call with the call center of Bank ............, he\ \ was informed about the locking of his electronic account in the online banking\ \ service and was told to dispute the transactions, which he did immediately through\ \ ... banking, while his daughter communicated about this with ....... Bank. The\ \ transfer to the account with the beneficiary was canceled, and the amount of\ \ €15,000 was returned to the plaintiff. After his investigation, he discovered\ \ that an unknown individual appeared at the branch of the first defendant, served\ \ by the second defendant, who posed as the plaintiff and presented a forged military\ \ ID card of the plaintiff, requesting and receiving a new sim card, resulting\ \ in the deactivation of the plaintiff's sim card and gaining access to the codes\ \ sent to him by the banks for completing the transfers. Due to the negligence\ \ of the second defendant, he did not realize that the identity used was forged,\ \ as since 2010, when the plaintiff retired, he has had a police identification\ \ card. The first defendant does not have security protocols to prevent such incidents,\ \ which constitute the sim ....
method, despite the issuance of a press release\ \ from the Attica Security and numerous publications regarding the aforementioned\ \ method, unlike other mobile phone companies, which implement a specific procedure\ \ for changing sim cards. The second defendant did not take the obvious step to\ \ check the functioning of the sim card before replacing it, where he would have\ \ realized that the plaintiff's mobile phone was functioning normally. Bank ..........\ \ and ........... Bank: a) accepted requests for transferring large amounts of\ \ money from accounts that had no similar activity in the past, while the plaintiff's\ \ online banking account with the above banks was locked quite some time later,\ \ b) sent email notifications regarding successful transactions in succession,\ \ under a single email, c) did not check the address ... of the perpetrators,\ \ which was different from that used by the plaintiff, and d) did not take necessary\ \ security measures to prevent fraud via sim ... against the plaintiff, as the\ \ security code (pin) sent by the banks via message to the mobile phone proved\ \ to be compromised. As a result of the above illegal and culpable behavior of\ \ the defendants, the plaintiff suffered property damage amounting to a total\ \ of €24,999, which constitutes the total amount of the transfers made by third\ \ unknown persons to accounts of unknown individuals, as stated above, and has\ \ not been refunded despite his repeated inquiries, while he also suffered distress\ \ and mental anguish, and his trust in the banks was shaken, thus entitling him\ \ to monetary compensation for his moral damage, amounting to €5,000." - source_sentence: Who can bring proceedings before the courts if a complaint has been rejected?
sentences: - '1.Supervisory authorities shall provide each other with relevant information and mutual assistance in order to implement and apply this Regulation in a consistent manner, and shall put in place measures for effective cooperation with one another. Mutual assistance shall cover, in particular, information requests and supervisory measures, such as requests to carry out prior authorisations and consultations, inspections and investigations. 2.Each supervisory authority shall take all appropriate measures required to reply to a request of another supervisory authority without undue delay and no later than one month after receiving the request. Such measures may include, in particular, the transmission of relevant information on the conduct of an investigation. 3.Requests for assistance shall contain all the necessary information, including the purpose of and reasons for the request. Information exchanged shall be used only for the purpose for which it was requested. 4.The requested supervisory authority shall not refuse to comply with the request unless: (a) it is not competent for the subject-matter of the request or for the measures it is requested to execute; or (b) compliance with the request would infringe this Regulation or Union or Member State law to which the supervisory authority receiving the request is subject. 5.The requested supervisory authority shall inform the requesting supervisory authority of the results or, as the case may be, of the progress of the measures taken in order to respond to the request. The requested supervisory authority shall provide reasons for any refusal to comply with a request pursuant to paragraph 4. 6.Requested supervisory authorities shall, as a rule, supply the information requested by other supervisory authorities by electronic means, using a standardised format. 7.Requested supervisory authorities shall not charge a fee for any action taken by them pursuant to a request for mutual assistance.
Supervisory authorities may agree on rules to indemnify each other for specific expenditure arising from the provision of mutual assistance in exceptional circumstances. 8.Where a supervisory authority does not provide the information referred to in paragraph 5 of this Article within one month of receiving the request of another supervisory authority, the requesting supervisory authority may adopt a provisional measure on the territory of its Member State in accordance with Article 55(1). In that case, the urgent need to act under Article 66(1) shall be presumed to be met and require an urgent binding decision from the Board pursuant to Article 66(2). 9.The Commission may, by means of implementing acts, specify the format and procedures for mutual assistance referred to in this Article and the arrangements for the exchange of information by electronic means between supervisory authorities, and between supervisory authorities and the Board, in particular the standardised format referred to in paragraph 6 of this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 93(2).' - "**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission\ \ of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Partially\ \ accepts the lawsuit. \n**Facts:** The plaintiff, who works as a lawyer, maintains\ \ a savings account with the defendant banking corporation under account number\ \ GR.............. Pursuant to a contract dated June 11, 2010, established in\ \ Thessaloniki between the defendant and the plaintiff, the plaintiff was granted\ \ access to the electronic banking system (e-banking) to conduct banking transactions\ \ remotely. On October 10, 2020, the plaintiff fell victim to electronic fraud\ \ through the \"phishing\" method, whereby an unknown perpetrator managed to extract\ \ and transfer €3,000.00 from the plaintiff's account to another account of the\ \ same bank.
Specifically, on that day at 6:51 a.m., the plaintiff received an\ \ email from the sender \".........\", with the address ..........., informing\ \ him that his debit card had been suspended and that online payments and cash\ \ withdrawals could not be made until the issue was resolved. The email urged\ \ him to confirm his details within the next 72 hours by following a link titled\ \ \"card activation.\" \nThe plaintiff read the above email on his mobile phone\ \ around 8:00 a.m., and believing it came from the defendant, he followed the\ \ instructions and accessed a website that was identical (a clone) to that of\ \ the defendant. On this page, he was asked to enter his login credentials to\ \ connect to the service, which he did, and he was subsequently asked to input\ \ his debit card details for the alleged activation, which he also provided. Then,\ \ to complete the process, a number was sent to his mobile phone at 8:07 a.m.\ \ from the sender ........, which he entered, and two minutes later he received\ \ a message from the same sender in English stating that the quick access code\ \ had been activated on his mobile. A few minutes later, at 8:18 a.m., he received\ \ an email from the defendant informing him of the transfer of €3,000.00 from\ \ his account to account number GR ........... held at the same bank, with the\ \ beneficiary's details being .......... As soon as the plaintiff read this, he\ \ immediately called the defendant's call center and canceled his debit card,\ \ the access codes for the service ......., and locked the application ..........\ \ At the same time, he verbally submitted a request to dispute and cancel the\ \ contested transaction, and in a subsequent phone call, he also canceled his\ \ credit card.
On the same day, he also sent an email to the defendant informing\ \ them in writing of the above and requesting the cancellation of the transaction\ \ and the return of the amount of €3,000.00 to his account, as this transfer was\ \ not made by him but by an unknown perpetrator through electronic fraud and was\ \ not approved by him. It should also be noted that the plaintiff, as the sole\ \ beneficiary according to the aforementioned contract for using the defendant's\ \ Internet Banking service, never received any update via SMS or the VIBER application\ \ from the bank regarding the transaction details before its completion, nor did\ \ he receive a one-time code (OTP) to approve the contested transaction. He subsequently\ \ filed a complaint against unknown persons at the Cyber Crime Division for the\ \ crime of fraud. The defendant sent an email to the plaintiff on October 16,\ \ 2020, informing him that his request had been forwarded to the appropriate department\ \ of the bank for investigation, stating that the bank would never send him an\ \ email or SMS asking him to enter his personal data and that as of October 7,\ \ 2020, there was a notice posted for its customers regarding malicious attempts\ \ to steal personal data in the \"Our News\" section on ....... A month after\ \ the disputed incident, on November 10, 2020, an amount of €2,296.82 was transferred\ \ to the plaintiff's account from the account to which the fraudulent credit had\ \ been made. The plaintiff immediately sent an email to the defendant asking to\ \ be informed whether this transfer was a return of part of the amount that had\ \ been illegally withdrawn from his account and requested the return of the remaining\ \ amount of €703.18.
In its response dated January 13, 2021, the defendant confirmed\ \ that the aforementioned amount indeed came from the account to which the fraudulent\ \ credit had been made, following a freeze of that account initiated by the defendant\ \ during the investigation of the incident, but refused to return the remaining\ \ amount, claiming it bore no responsibility for the leak of the personal codes\ \ to third parties, according to the terms of the service contract established\ \ between them. \nFrom the entirety of the evidence presented to the court, there\ \ is no indication of the authenticity of the contested transaction, as the plaintiff\ \ did not give his consent for the execution of the transfer of the amount of\ \ €3,000.00, especially in light of the provision in Article 72 paragraph 2 of\ \ Law 4537/2018 stating that the mere use of the Internet Banking service by the\ \ plaintiff does not necessarily constitute sufficient evidence that the payer\ \ approved the payment action. Specifically, it was proven that the contested\ \ transaction was not carried out following a strong identification of the plaintiff\ \ – the sole beneficiary of the account – and his approval, as the latter may\ \ have entered his personal codes on the counterfeit website; however, he was\ \ never informed, before the completion of the contested transaction, of the amount\ \ that would be transferred from his account to a third-party account, nor did\ \ he receive on his mobile phone, either via SMS or through the VIBER application\ \ or any other means, the one-time code - extra PIN for its completion, which\ \ he was required to enter to approve the contested transaction (payment action)\ \ and thus complete his identification, a fact that was not countered by any evidence\ \ from the defendant.
Furthermore, it is noted that the defendant's claims that\ \ it bears no responsibility under the terms of the banking services contract,\ \ whereby it is not liable for any damage to its customer in cases of unauthorized\ \ use of their personal access codes to the Internet Banking service, are to be\ \ rejected as fundamentally unfounded. This is because the aforementioned contractual\ \ terms are invalid according to the provision of Article 103 of Law 4537/2018,\ \ as they contradict the provisions of Articles 71, 73, and 92 of the same Law,\ \ which provide for the provider's universal liability and its exemption only\ \ for unusual and unforeseen circumstances that are beyond the control of the\ \ party invoking them and whose consequences could not have been avoided despite\ \ all efforts to the contrary; these provisions establish mandatory law in favor\ \ of users, as according to Article 103 of Law 4537/2018, payment service providers\ \ are prohibited from deviating from the provisions to the detriment of payment\ \ service users, unless the possibility of deviation is explicitly provided and\ \ they can decide to offer only more favorable terms to payment service users;\ \ the aforementioned contractual terms do not constitute more favorable terms\ \ but rather disadvantageous terms for the payment service user. In this case,\ \ however, the defendant did not prove the authenticity of the transaction and\ \ its approval by the plaintiff and did not invoke, nor did any unusual and unforeseen\ \ circumstances beyond its control, the consequences of which could not have been\ \ avoided despite all efforts to the contrary, come to light. 
Therefore, the contested\ \ transaction transferring the amount of €3,000.00 is considered, in the absence\ \ of demonstrable consent from the plaintiff, unapproved according to the provisions\ \ of Article 64 of Law 4537/2018, and the defendant's contrary claims are rejected,\ \ especially since the plaintiff proceeded, according to Article 71 paragraph\ \ 1 of Law 4537/2018, without undue delay to notify the defendant regarding the\ \ contested unapproved payment action. Consequently, the defendant is liable for\ \ compensating the plaintiff for the positive damage he suffered under Article\ \ 73 of Law 4537/2018 and is obliged to pay him the requested amount of €703.18,\ \ while the plaintiff's fault in the occurrence of this damage cannot be established,\ \ as he entered his personal details in an online environment that was a faithful\ \ imitation of that of the defendant, as evidenced by the comparison of the screenshots\ \ of the fake website and the real website provided by the plaintiff, a fact that\ \ he could not have known while being fully convinced that he was transacting\ \ with the defendant. Furthermore, the defendant's liability to compensate the\ \ plaintiff is based on the provision of Article 8 of Law 2251/1994, which applies\ \ in this case, as the plaintiff's damage resulted from inadequate fulfillment\ \ of its obligations in the context of providing its services, but also on the\ \ provision of Article 914 of the Civil Code in the sense of omission on its part\ \ of unlawfully and culpably imposed actions.
In this case, given that during\ \ the relevant period there had been a multitude of similar incidents of fraud\ \ against the defendant's customers, the latter, as a service provider to the\ \ consumer public and bearing transactional obligations of care and security towards\ \ them, displayed gross negligence regarding the security provided for electronic\ \ transaction services, which was compromised by the fraudulent theft of funds,\ \ as it did not comply with all required high-security measures for executing\ \ the contested transaction, failing to implement the strict customer identification\ \ verification process and to check the authenticity of the account to which the\ \ funds were sent, thus not assuming the suspicious nature of the transaction,\ \ did not adopt comprehensive and improved protective measures to fully protect\ \ its customers against malicious attacks and online fraud and to prevent the\ \ infiltration of unauthorized third parties, nor did it fulfill its obligations\ \ to inform, accurately inform, and warn its consumers - customers, as it failed\ \ to adequately inform them of attempts to steal their personal data through the\ \ sending of informative emails or SMS, while merely posting in a section rather\ \ than on a central banner (as it later did) does not constitute adequate information\ \ such that it meets the requirement of protecting its customers and the increased\ \ safeguarding of their interests. Although the plaintiff acted promptly and informed\ \ the defendant on the same day about the contested incident, the defendant did\ \ not act as promptly regarding the investigation of the incident and the freezing\ \ of the account that held the fraudulent credit to prevent the plaintiff's loss,\ \ but only returned part of the funds to the plaintiff a month later. 
This behavior,\ \ beyond being culpable due to gross negligence, was also unlawful, as it would\ \ have been illegal even without the contractual relationship, as contrary to\ \ the provisions of Law 4537/2018 and Law 2251/1994, regarding the lack of security\ \ of the services that the consumer is legitimately entitled to expect, as well\ \ as the building of trust that is essential in banking transactions, elements\ \ that it was obligated to provide within the sphere of the services offered,\ \ and contrary to the principles of good faith and commercial ethics, as crystallized\ \ in the provision of Article 288 of the Civil Code, as well as the general duty\ \ imposed by Article 914 of the Civil Code not to cause harm to another culpably.\ \ This resulted not only in positive damage to the plaintiff but also in causing\ \ him moral harm consisting of his mental distress and the disruption, agitation,\ \ and sorrow he experienced, for which he must be awarded financial compensation.\ \ Taking into account all the general circumstances of the case, the extent of\ \ the plaintiff's damage, the severity of the defendant's fault, the mental distress\ \ suffered by the plaintiff, the insecurity he felt regarding his deposits, the\ \ sorrow he experienced, and the stress caused by his financial loss, which occurred\ \ during the pandemic period when his earnings from his professional activity\ \ had significantly decreased, as well as the financial and social situation of\ \ the parties, it is the court's opinion that he should be granted, as financial\ \ compensation for his moral harm, an amount of โ‚ฌ250.00, which is deemed reasonable\ \ and fair. Therefore, the total monetary amount that the plaintiff is entitled\ \ to for his positive damage and financial compensation for the moral harm suffered\ \ amounts to a total of (โ‚ฌ703.18 + โ‚ฌ250.00) = โ‚ฌ953.18." 
- Any natural or legal person has the right to bring an action for annulment of decisions of the Board before the Court of Justice under the conditions provided for in Article 263 TFEU. As addressees of such decisions, the supervisory authorities concerned which wish to challenge them have to bring action within two months of being notified of them, in accordance with Article 263 TFEU. Where decisions of the Board are of direct and individual concern to a controller, processor or complainant, the latter may bring an action for annulment against those decisions within two months of their publication on the website of the Board, in accordance with Article 263 TFEU. Without prejudice to this right under Article 263 TFEU, each natural or legal person should have an effective judicial remedy before the competent national court against a decision of a supervisory authority which produces legal effects concerning that person. Such a decision concerns in particular the exercise of investigative, corrective and authorisation powers by the supervisory authority or the dismissal or rejection of complaints. However, the right to an effective judicial remedy does not encompass measures taken by supervisory authorities which are not legally binding, such as opinions issued by or advice provided by the supervisory authority. Proceedings against a supervisory authority should be brought before the courts of the Member State where the supervisory authority is established and should be conducted in accordance with that Member State's procedural law. Those courts should exercise full jurisdiction, which should include jurisdiction to examine all questions of fact and law relevant to the dispute before them. Where a complaint has been rejected or dismissed by a supervisory authority, the complainant may bring proceedings before the courts in the same Member State. 
In the context of judicial remedies relating to the application of this Regulation, national courts which consider a decision on the question necessary to enable them to give judgment, may, or in the case provided for in Article 267 TFEU, must, request the Court of Justice to give a preliminary ruling on the interpretation of Union law, including this Regulation. Furthermore, where a decision of a supervisory authority implementing a decision of the Board is challenged before a national court and the validity of the decision of the Board is at issue, that national court does not have the power to declare the Board's decision invalid but must refer the question of validity to the Court of Justice in accordance with Article 267 TFEU as interpreted by the Court of Justice, where it considers the decision invalid. However, a national court may not refer a question on the validity of the decision of the Board at the request of a natural or legal person which had the opportunity to bring an action for annulment of that decision, in particular if it was directly and individually concerned by that decision, but had not done so within the period laid down in Article 263 TFEU.
- source_sentence: What are the defendant's claims described as?
  sentences:
  - '1.Without prejudice to other tasks set out under this Regulation, each supervisory authority shall on its territory: (a) monitor and enforce the application of this Regulation; (b) promote public awareness and understanding of the risks, rules, safeguards and rights in relation to processing.
Activities addressed specifically to children shall receive specific attention; (c) advise, in accordance with Member State law, the national parliament, the government, and other institutions and bodies on legislative and administrative measures relating to the protection of natural persons'' rights and freedoms with regard to processing; (d) promote the awareness of controllers and processors of their obligations under this Regulation; (e) upon request, provide information to any data subject concerning the exercise of their rights under this Regulation and, if appropriate, cooperate with the supervisory authorities in other Member States to that end; (f) handle complaints lodged by a data subject, or by a body, organisation or association in accordance with Article 80, and investigate, to the extent appropriate, the subject matter of the complaint and inform the complainant of the progress and the outcome of the investigation within a reasonable period, in particular if further investigation or coordination with another supervisory authority is necessary; (g) cooperate with, including sharing information and provide mutual assistance to, other supervisory authorities with a view to ensuring the consistency of application and enforcement of this Regulation; (h) conduct investigations on the application of this Regulation, including on the basis of information received from another supervisory authority or other public authority; (i) monitor relevant developments, insofar as they have an impact on the protection of personal data, in particular the development of information and communication technologies and commercial practices; (j) adopt standard contractual clauses referred to in Article 28(8) and in point (d) of Article 46(2); (k) establish and maintain a list in relation to the requirement for data protection impact assessment pursuant to Article 35(4); (l) give advice on the processing operations referred to in Article 36(2); (m) encourage the drawing up of 
codes of conduct pursuant to Article 40(1) and provide an opinion and approve such codes of conduct which provide sufficient safeguards, pursuant to Article 40(5); (n) encourage the establishment of data protection certification mechanisms and of data protection seals and marks pursuant to Article 42(1), and approve the criteria of certification pursuant to Article 42(5); (o) where applicable, carry out a periodic review of certifications issued in accordance with Article 42(7); 4.5.2016 L 119/68 (p) draft and publish the criteria for accreditation of a body for monitoring codes of conduct pursuant to Article 41 and of a certification body pursuant to Article 43; (q) conduct the accreditation of a body for monitoring codes of conduct pursuant to Article 41 and of a certification body pursuant to Article 43; (r) authorise contractual clauses and provisions referred to in Article 46(3); (s) approve binding corporate rules pursuant to Article 47; (t) contribute to the activities of the Board; (u) keep internal records of infringements of this Regulation and of measures taken in accordance with Article 58(2); and (v) fulfil any other tasks related to the protection of personal data. 2.Each supervisory authority shall facilitate the submission of complaints referred to in point (f) of paragraph 1 by measures such as a complaint submission form which can also be completed electronically, without excluding other means of communication. 3.The performance of the tasks of each supervisory authority shall be free of charge for the data subject and, where applicable, for the data protection officer. 4.Where requests are manifestly unfounded or excessive, in particular because of their repetitive character, the supervisory authority may charge a reasonable fee based on administrative costs, or refuse to act on the request. The supervisory authority shall bear the burden of demonstrating the manifestly unfounded or excessive character of the request.' 
- "**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Partially accepts the lawsuit. \n**Facts:** The plaintiff, who works as a lawyer, maintains a savings account with the defendant banking corporation under account number GR.............. Pursuant to a contract dated June 11, 2010, established in Thessaloniki between the defendant and the plaintiff, the plaintiff was granted access to the electronic banking system (e-banking) to conduct banking transactions remotely. On October 10, 2020, the plaintiff fell victim to electronic fraud through the \"phishing\" method, whereby an unknown perpetrator managed to extract and transfer €3,000.00 from the plaintiff's account to another account of the same bank. Specifically, on that day at 6:51 a.m., the plaintiff received an email from the sender \".........\", with the address ..........., informing him that his debit card had been suspended and that online payments and cash withdrawals could not be made until the issue was resolved. The email urged him to confirm his details within the next 72 hours by following a link titled \"card activation.\" \nThe plaintiff read the above email on his mobile phone around 8:00 a.m., and believing it came from the defendant, he followed the instructions and accessed a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter his login credentials to connect to the service, which he did, and he was subsequently asked to input his debit card details for the alleged activation, which he also provided. Then, to complete the process, a number was sent to his mobile phone at 8:07 a.m. from the sender ........, which he entered, and two minutes later he received a message from the same sender in English stating that the quick access code had been activated on his mobile.
A few minutes later, at 8:18 a.m., he received an email from the defendant informing him of the transfer of €3,000.00 from his account to account number GR ........... held at the same bank, with the beneficiary's details being .......... As soon as the plaintiff read this, he immediately called the defendant's call center and canceled his debit card, the access codes for the service ......., and locked the application .......... At the same time, he verbally submitted a request to dispute and cancel the contested transaction, and in a subsequent phone call, he also canceled his credit card. On the same day, he also sent an email to the defendant informing them in writing of the above and requesting the cancellation of the transaction and the return of the amount of €3,000.00 to his account, as this transfer was not made by him but by an unknown perpetrator through electronic fraud and was not approved by him. It should also be noted that the plaintiff, as the sole beneficiary according to the aforementioned contract for using the defendant's Internet Banking service, never received any update via SMS or the VIBER application from the bank regarding the transaction details before its completion, nor did he receive a one-time code (OTP) to approve the contested transaction. He subsequently filed a complaint against unknown persons at the Cyber Crime Division for the crime of fraud. The defendant sent an email to the plaintiff on October 16, 2020, informing him that his request had been forwarded to the appropriate department of the bank for investigation, stating that the bank would never send him an email or SMS asking him to enter his personal data and that as of October 7, 2020, there was a notice posted for its customers regarding malicious attempts to steal personal data in the \"Our News\" section on .......
A month after the disputed incident, on November 10, 2020, an amount of €2,296.82 was transferred to the plaintiff's account from the account to which the fraudulent credit had been made. The plaintiff immediately sent an email to the defendant asking to be informed whether this transfer was a return of part of the amount that had been illegally withdrawn from his account and requested the return of the remaining amount of €703.18. In its response dated January 13, 2021, the defendant confirmed that the aforementioned amount indeed came from the account to which the fraudulent credit had been made, following a freeze of that account initiated by the defendant during the investigation of the incident, but refused to return the remaining amount, claiming it bore no responsibility for the leak of the personal codes to third parties, according to the terms of the service contract established between them. \nFrom the entirety of the evidence presented to the court, there is no indication of the authenticity of the contested transaction, as the plaintiff did not give his consent for the execution of the transfer of the amount of €3,000.00, especially in light of the provision in Article 72 paragraph 2 of Law 4537/2018 stating that the mere use of the Internet Banking service by the plaintiff does not necessarily constitute sufficient evidence that the payer approved the payment action.
Specifically, it was proven that the contested transaction was not carried out following a strong identification of the plaintiff – the sole beneficiary of the account – and his approval, as the latter may have entered his personal codes on the counterfeit website; however, he was never informed, before the completion of the contested transaction, of the amount that would be transferred from his account to a third-party account, nor did he receive on his mobile phone, either via SMS or through the VIBER application or any other means, the one-time code - extra PIN for its completion, which he was required to enter to approve the contested transaction (payment action) and thus complete his identification, a fact that was not countered by any evidence from the defendant. Furthermore, it is noted that the defendant's claims that it bears no responsibility under the terms of the banking services contract, whereby it is not liable for any damage to its customer in cases of unauthorized use of their personal access codes to the Internet Banking service, are to be rejected as fundamentally unfounded.
This is because the aforementioned contractual terms are invalid according to the provision of Article 103 of Law 4537/2018, as they contradict the provisions of Articles 71, 73, and 92 of the same Law, which provide for the provider's universal liability and its exemption only for unusual and unforeseen circumstances that are beyond the control of the party invoking them and whose consequences could not have been avoided despite all efforts to the contrary; these provisions establish mandatory law in favor of users, as according to Article 103 of Law 4537/2018, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is explicitly provided and they can decide to offer only more favorable terms to payment service users; the aforementioned contractual terms do not constitute more favorable terms but rather disadvantageous terms for the payment service user. In this case, however, the defendant did not prove the authenticity of the transaction and its approval by the plaintiff and did not invoke, nor did any unusual and unforeseen circumstances beyond its control, the consequences of which could not have been avoided despite all efforts to the contrary, come to light. Therefore, the contested transaction transferring the amount of €3,000.00 is considered, in the absence of demonstrable consent from the plaintiff, unapproved according to the provisions of Article 64 of Law 4537/2018, and the defendant's contrary claims are rejected, especially since the plaintiff proceeded, according to Article 71 paragraph 1 of Law 4537/2018, without undue delay to notify the defendant regarding the contested unapproved payment action.
Consequently, the defendant is liable for compensating the plaintiff for the positive damage he suffered under Article 73 of Law 4537/2018 and is obliged to pay him the requested amount of €703.18, while the plaintiff's fault in the occurrence of this damage cannot be established, as he entered his personal details in an online environment that was a faithful imitation of that of the defendant, as evidenced by the comparison of the screenshots of the fake website and the real website provided by the plaintiff, a fact that he could not have known while being fully convinced that he was transacting with the defendant. Furthermore, the defendant's liability to compensate the plaintiff is based on the provision of Article 8 of Law 2251/1994, which applies in this case, as the plaintiff's damage resulted from inadequate fulfillment of its obligations in the context of providing its services, but also on the provision of Article 914 of the Civil Code in the sense of omission on its part of unlawfully and culpably imposed actions.
In this case, given that during the relevant period there had been a multitude of similar incidents of fraud against the defendant's customers, the latter, as a service provider to the consumer public and bearing transactional obligations of care and security towards them, displayed gross negligence regarding the security provided for electronic transaction services, which was compromised by the fraudulent theft of funds, as it did not comply with all required high-security measures for executing the contested transaction, failing to implement the strict customer identification verification process and to check the authenticity of the account to which the funds were sent, thus not assuming the suspicious nature of the transaction, did not adopt comprehensive and improved protective measures to fully protect its customers against malicious attacks and online fraud and to prevent the infiltration of unauthorized third parties, nor did it fulfill its obligations to inform, accurately inform, and warn its consumers - customers, as it failed to adequately inform them of attempts to steal their personal data through the sending of informative emails or SMS, while merely posting in a section rather than on a central banner (as it later did) does not constitute adequate information such that it meets the requirement of protecting its customers and the increased safeguarding of their interests. Although the plaintiff acted promptly and informed the defendant on the same day about the contested incident, the defendant did not act as promptly regarding the investigation of the incident and the freezing of the account that held the fraudulent credit to prevent the plaintiff's loss, but only returned part of the funds to the plaintiff a month later.
This behavior, beyond being culpable due to gross negligence, was also unlawful, as it would have been illegal even without the contractual relationship, as contrary to the provisions of Law 4537/2018 and Law 2251/1994, regarding the lack of security of the services that the consumer is legitimately entitled to expect, as well as the building of trust that is essential in banking transactions, elements that it was obligated to provide within the sphere of the services offered, and contrary to the principles of good faith and commercial ethics, as crystallized in the provision of Article 288 of the Civil Code, as well as the general duty imposed by Article 914 of the Civil Code not to cause harm to another culpably. This resulted not only in positive damage to the plaintiff but also in causing him moral harm consisting of his mental distress and the disruption, agitation, and sorrow he experienced, for which he must be awarded financial compensation. Taking into account all the general circumstances of the case, the extent of the plaintiff's damage, the severity of the defendant's fault, the mental distress suffered by the plaintiff, the insecurity he felt regarding his deposits, the sorrow he experienced, and the stress caused by his financial loss, which occurred during the pandemic period when his earnings from his professional activity had significantly decreased, as well as the financial and social situation of the parties, it is the court's opinion that he should be granted, as financial compensation for his moral harm, an amount of €250.00, which is deemed reasonable and fair. Therefore, the total monetary amount that the plaintiff is entitled to for his positive damage and financial compensation for the moral harm suffered amounts to a total of (€703.18 + €250.00) = €953.18."
- "**Court (Civil/Criminal):**\nProvisions: Articles 8 of Law 2251/1994, Articles 2, 4, 48 et seq.
of Law 4537/2018, Article 11 paragraph 1 of Law 4261/2014, Articles 830, 806, 827, 914, 932 of the Civil Code and 176 of the Code of Civil Procedure.\nTime of commission of the act:\nOutcome (not guilty, guilty):\nRationale: Electronic fraud through the method of phishing. A third party fraudulently obtained money from the plaintiff's bank account and transferred it to another bank account. Both the defendant is liable for the inadequate protection of its systems, which should have been excellent, and the plaintiff who failed to fulfill his obligation to protect his information and disregarded the defendant's security instructions. Law 4537/2018 introduces mandatory law in favor of users, as according to Article 103, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users. It is determined that a resumption of the discussion should be ordered in order to provide all possible evidence, with diligence from both parties, especially from the defendant, who has access to the transaction data through its systems, but also bears the relevant burden of proof concerning the exact timing of the execution of the money transfer order at each stage (withdrawal from the plaintiff's account, transfer to another bank, transfer to the third party's account).\nFacts: The plaintiff maintains a joint bank account with his wife at the defendant bank and has also agreed to online banking transactions (e-banking). On July 31, 2020, at 13:45, the plaintiff was informed of a transfer of €3,000 from his account, which he had not initiated, nor had his wife. At 14:05, he immediately contacted the bank's customer service line and reported the incident, stating that it was not his action and requesting its cancellation.
The bank employee found that the plaintiff had provided his details to a fake website 10 days earlier, and subsequently, the mobile number used for transaction confirmations had been changed. The employee informed him that the money was at the other bank and that they would logically be able to retrieve it, provided it had not already been transferred to a third party's account. Since then, the plaintiff has not seen any return of the amount to his account, and he has made numerous attempts to resolve the issue with the bank, with effort, costs, and distress; however, nothing was achieved, as the money had already entered a third party's account and the defendant denied responsibility for the transfer of the funds.\nFacts: The plaintiff maintained a joint account with his wife at a bank and used internet banking services. On July 21, 2020, a third party deceived the plaintiff through phishing (a misleading SMS with a link), obtaining his banking credentials. The third party, using the stolen information, requested a phone number change for receiving OTP (one-time password) and completing electronic transactions. The bank completed the change process based on the correct credentials. On July 31, 2020, a transfer of €3,000 was made from the plaintiff's account to a third party. The plaintiff was immediately informed, called the bank, and reported the fraud; however, the recovery of the funds was not successful. The plaintiff claims that the bank is responsible for inadequate protection of its systems, while the bank asserts that it followed the procedure based on the agreed identification methods. \nThe court recognizes that there is responsibility on both sides: the bank for inadequate security and prevention of phishing, and the plaintiff for negligence in safeguarding his personal information, despite the bank's relevant warnings.
A critical issue is the exact timing of the completion of\ \ the transfer: if the bank was timely notified of the fraud but did not intervene,\ \ it may be fully liable. The court requests a resumption of the discussion and\ \ further evidence, mainly from the bank, which has access to the relevant technical\ \ details." pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: legal-bert-base-uncased results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.3560606060606061 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.3813131313131313 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.43434343434343436 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.49747474747474746 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3560606060606061 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3409090909090909 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.31565656565656564 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.27222222222222225 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.08338845128654723 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.21218012780623383 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.28051933181101324 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.39632435240497227 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.42014150424266994 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.38587461920795246 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.46701192478698805 name: Cosine Map@100 - task: type: 
information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.351010101010101 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.3888888888888889 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4393939393939394 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.49242424242424243 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.351010101010101 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.34006734006734 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.31616161616161614 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.27474747474747474 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.0811845209576169 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.21096571721682325 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2782625608042422 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.3889942981999181 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.41774434112925773 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.38250360750360746 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.46582524009522214 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.3434343434343434 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.3813131313131313 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.4065656565656566 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.4823232323232323 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3434343434343434 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3341750841750841 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.3106060606060606 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.2691919191919192 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.07709196999006594 
name: Cosine Recall@1 - type: cosine_recall@3 value: 0.19818817839761774 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.2633729750813232 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.3825298521521387 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.40624359869930343 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3728455186788519 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.45498240804974704 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.3434343434343434 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.3686868686868687 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.39646464646464646 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.45707070707070707 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.3434343434343434 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3274410774410774 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.302020202020202 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.2598484848484849 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.07661351326160921 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1918760158828037 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.25215259008056984 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.3587527703094874 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.39179726664623976 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3657567740901074 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.4478819490198904 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.2777777777777778 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.29797979797979796 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.33585858585858586 
name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.39141414141414144 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.2777777777777778 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.265993265993266 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.248989898989899 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.21212121212121213 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.061055029539267065 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.1586017468669595 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.22049184152294302 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.3034883649569159 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.3248314052080558 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.3000260541927209 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.38153227130259265 name: Cosine Map@100
---

# legal-bert-base-uncased

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
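The semantic-search use mentioned above reduces to cosine-similarity ranking over the embedding vectors. A minimal, self-contained sketch of that ranking step, using hypothetical toy vectors as stand-ins for `model.encode` output (the real embeddings are 768-dimensional):

```python
import math


def cosine_rank(query, corpus):
    """Return corpus indices sorted by descending cosine similarity to the query."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def cos(a, b):
        return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

    scores = [cos(query, doc) for doc in corpus]
    return sorted(range(len(corpus)), key=lambda i: -scores[i])


# Toy 4-dimensional stand-ins for model.encode() output.
query = [1.0, 0.0, 0.0, 0.0]
corpus = [
    [0.9, 0.1, 0.0, 0.0],  # near-duplicate of the query
    [0.0, 1.0, 0.0, 0.0],  # unrelated
    [0.5, 0.5, 0.0, 0.0],  # partial overlap
]
print(cosine_rank(query, corpus))  # -> [0, 2, 1]
```

In practice the same ranking is obtained directly from `model.similarity` on the encoded vectors, as shown in the usage snippet in this card.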
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) <!-- at revision 15b570cbf88259610b082a167dacc190124f60f6 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    "What are the defendant's claims described as?",
    '**Court (Civil/Criminal): Civil** \n**Provisions:** \n**Time of commission of the act:** \n**Outcome (not guilty, guilty):** \n**Reasoning:** Partially accepts the lawsuit. \n**Facts:** The plaintiff, who works as a lawyer, maintains a savings account with the defendant banking corporation under account number GR..............
Pursuant to a contract dated June 11, 2010, established in Thessaloniki between the defendant and the plaintiff, the plaintiff was granted access to the electronic banking system (e-banking) to conduct banking transactions remotely. On October 10, 2020, the plaintiff fell victim to electronic fraud through the "phishing" method, whereby an unknown perpetrator managed to extract and transfer €3,000.00 from the plaintiff’s account to another account of the same bank. Specifically, on that day at 6:51 a.m., the plaintiff received an email from the sender ".........", with the address ..........., informing him that his debit card had been suspended and that online payments and cash withdrawals could not be made until the issue was resolved. The email urged him to confirm his details within the next 72 hours by following a link titled "card activation." \nThe plaintiff read the above email on his mobile phone around 8:00 a.m., and believing it came from the defendant, he followed the instructions and accessed a website that was identical (a clone) to that of the defendant. On this page, he was asked to enter his login credentials to connect to the service, which he did, and he was subsequently asked to input his debit card details for the alleged activation, which he also provided. Then, to complete the process, a number was sent to his mobile phone at 8:07 a.m. from the sender ........, which he entered, and two minutes later he received a message from the same sender in English stating that the quick access code had been activated on his mobile. A few minutes later, at 8:18 a.m., he received an email from the defendant informing him of the transfer of €3,000.00 from his account to account number GR ........... held at the same bank, with the beneficiary\'s details being ..........
As soon as the plaintiff read this, he immediately called the defendant\'s call center and canceled his debit card, the access codes for the service ......., and locked the application .......... At the same time, he verbally submitted a request to dispute and cancel the contested transaction, and in a subsequent phone call, he also canceled his credit card. On the same day, he also sent an email to the defendant informing them in writing of the above and requesting the cancellation of the transaction and the return of the amount of €3,000.00 to his account, as this transfer was not made by him but by an unknown perpetrator through electronic fraud and was not approved by him. It should also be noted that the plaintiff, as the sole beneficiary according to the aforementioned contract for using the defendant\'s Internet Banking service, never received any update via SMS or the VIBER application from the bank regarding the transaction details before its completion, nor did he receive a one-time code (OTP) to approve the contested transaction. He subsequently filed a complaint against unknown persons at the Cyber Crime Division for the crime of fraud. The defendant sent an email to the plaintiff on October 16, 2020, informing him that his request had been forwarded to the appropriate department of the bank for investigation, stating that the bank would never send him an email or SMS asking him to enter his personal data and that as of October 7, 2020, there was a notice posted for its customers regarding malicious attempts to steal personal data in the "Our News" section on ....... A month after the disputed incident, on November 10, 2020, an amount of €2,296.82 was transferred to the plaintiff\'s account from the account to which the fraudulent credit had been made.
The plaintiff immediately sent an email to the defendant asking to be informed whether this transfer was a return of part of the amount that had been illegally withdrawn from his account and requested the return of the remaining amount of €703.18. In its response dated January 13, 2021, the defendant confirmed that the aforementioned amount indeed came from the account to which the fraudulent credit had been made, following a freeze of that account initiated by the defendant during the investigation of the incident, but refused to return the remaining amount, claiming it bore no responsibility for the leak of the personal codes to third parties, according to the terms of the service contract established between them. \nFrom the entirety of the evidence presented to the court, there is no indication of the authenticity of the contested transaction, as the plaintiff did not give his consent for the execution of the transfer of the amount of €3,000.00, especially in light of the provision in Article 72 paragraph 2 of Law 4537/2018 stating that the mere use of the Internet Banking service by the plaintiff does not necessarily constitute sufficient evidence that the payer approved the payment action.
Specifically, it was proven that the contested transaction was not carried out following a strong identification of the plaintiff – the sole beneficiary of the account – and his approval, as the latter may have entered his personal codes on the counterfeit website; however, he was never informed, before the completion of the contested transaction, of the amount that would be transferred from his account to a third-party account, nor did he receive on his mobile phone, either via SMS or through the VIBER application or any other means, the one-time code - extra PIN for its completion, which he was required to enter to approve the contested transaction (payment action) and thus complete his identification, a fact that was not countered by any evidence from the defendant. Furthermore, it is noted that the defendant\'s claims that it bears no responsibility under the terms of the banking services contract, whereby it is not liable for any damage to its customer in cases of unauthorized use of their personal access codes to the Internet Banking service, are to be rejected as fundamentally unfounded.
This is because the aforementioned contractual terms are invalid according to the provision of Article 103 of Law 4537/2018, as they contradict the provisions of Articles 71, 73, and 92 of the same Law, which provide for the provider\'s universal liability and its exemption only for unusual and unforeseen circumstances that are beyond the control of the party invoking them and whose consequences could not have been avoided despite all efforts to the contrary; these provisions establish mandatory law in favor of users, as according to Article 103 of Law 4537/2018, payment service providers are prohibited from deviating from the provisions to the detriment of payment service users, unless the possibility of deviation is explicitly provided and they can decide to offer only more favorable terms to payment service users; the aforementioned contractual terms do not constitute more favorable terms but rather disadvantageous terms for the payment service user. In this case, however, the defendant did not prove the authenticity of the transaction and its approval by the plaintiff and did not invoke, nor did any unusual and unforeseen circumstances beyond its control, the consequences of which could not have been avoided despite all efforts to the contrary, come to light. Therefore, the contested transaction transferring the amount of €3,000.00 is considered, in the absence of demonstrable consent from the plaintiff, unapproved according to the provisions of Article 64 of Law 4537/2018, and the defendant\'s contrary claims are rejected, especially since the plaintiff proceeded, according to Article 71 paragraph 1 of Law 4537/2018, without undue delay to notify the defendant regarding the contested unapproved payment action.
Consequently, the defendant is liable for compensating the plaintiff for the positive damage he suffered under Article 73 of Law 4537/2018 and is obliged to pay him the requested amount of €703.18, while the plaintiff’s fault in the occurrence of this damage cannot be established, as he entered his personal details in an online environment that was a faithful imitation of that of the defendant, as evidenced by the comparison of the screenshots of the fake website and the real website provided by the plaintiff, a fact that he could not have known while being fully convinced that he was transacting with the defendant. Furthermore, the defendant’s liability to compensate the plaintiff is based on the provision of Article 8 of Law 2251/1994, which applies in this case, as the plaintiff\'s damage resulted from inadequate fulfillment of its obligations in the context of providing its services, but also on the provision of Article 914 of the Civil Code in the sense of omission on its part of unlawfully and culpably imposed actions.
In this case, given that during the relevant period there had been a multitude of similar incidents of fraud against the defendant\'s customers, the latter, as a service provider to the consumer public and bearing transactional obligations of care and security towards them, displayed gross negligence regarding the security provided for electronic transaction services, which was compromised by the fraudulent theft of funds, as it did not comply with all required high-security measures for executing the contested transaction, failing to implement the strict customer identification verification process and to check the authenticity of the account to which the funds were sent, thus not assuming the suspicious nature of the transaction, did not adopt comprehensive and improved protective measures to fully protect its customers against malicious attacks and online fraud and to prevent the infiltration of unauthorized third parties, nor did it fulfill its obligations to inform, accurately inform, and warn its consumers - customers, as it failed to adequately inform them of attempts to steal their personal data through the sending of informative emails or SMS, while merely posting in a section rather than on a central banner (as it later did) does not constitute adequate information such that it meets the requirement of protecting its customers and the increased safeguarding of their interests. Although the plaintiff acted promptly and informed the defendant on the same day about the contested incident, the defendant did not act as promptly regarding the investigation of the incident and the freezing of the account that held the fraudulent credit to prevent the plaintiff\'s loss, but only returned part of the funds to the plaintiff a month later. 
This behavior, beyond being culpable due to gross negligence, was also unlawful, as it would have been illegal even without the contractual relationship, as contrary to the provisions of Law 4537/2018 and Law 2251/1994, regarding the lack of security of the services that the consumer is legitimately entitled to expect, as well as the building of trust that is essential in banking transactions, elements that it was obligated to provide within the sphere of the services offered, and contrary to the principles of good faith and commercial ethics, as crystallized in the provision of Article 288 of the Civil Code, as well as the general duty imposed by Article 914 of the Civil Code not to cause harm to another culpably. This resulted not only in positive damage to the plaintiff but also in causing him moral harm consisting of his mental distress and the disruption, agitation, and sorrow he experienced, for which he must be awarded financial compensation. Taking into account all the general circumstances of the case, the extent of the plaintiff\'s damage, the severity of the defendant\'s fault, the mental distress suffered by the plaintiff, the insecurity he felt regarding his deposits, the sorrow he experienced, and the stress caused by his financial loss, which occurred during the pandemic period when his earnings from his professional activity had significantly decreased, as well as the financial and social situation of the parties, it is the court\'s opinion that he should be granted, as financial compensation for his moral harm, an amount of €250.00, which is deemed reasonable and fair.
Therefore, the total monetary amount that the plaintiff is entitled to for his positive damage and financial compensation for the moral harm suffered amounts to a total of (€703.18 + €250.00) = €953.18.',
    "1.Without prejudice to other tasks set out under this Regulation, each supervisory authority shall on its territory: (a) monitor and enforce the application of this Regulation; (b) promote public awareness and understanding of the risks, rules, safeguards and rights in relation to processing. Activities addressed specifically to children shall receive specific attention; (c) advise, in accordance with Member State law, the national parliament, the government, and other institutions and bodies on legislative and administrative measures relating to the protection of natural persons' rights and freedoms with regard to processing; (d) promote the awareness of controllers and processors of their obligations under this Regulation; (e) upon request, provide information to any data subject concerning the exercise of their rights under this Regulation and, if appropriate, cooperate with the supervisory authorities in other Member States to that end; (f) handle complaints lodged by a data subject, or by a body, organisation or association in accordance with Article 80, and investigate, to the extent appropriate, the subject matter of the complaint and inform the complainant of the progress and the outcome of the investigation within a reasonable period, in particular if further investigation or coordination with another supervisory authority is necessary; (g) cooperate with, including sharing information and provide mutual assistance to, other supervisory authorities with a view to ensuring the consistency of application and enforcement of this Regulation; (h) conduct investigations on the application of this Regulation, including on the basis of information received from another supervisory authority or other public authority; (i) monitor relevant developments,
insofar as they have an impact on the protection of personal data, in particular the development of information and communication technologies and commercial practices; (j) adopt standard contractual clauses referred to in Article 28(8) and in point (d) of Article 46(2); (k) establish and maintain a list in relation to the requirement for data protection impact assessment pursuant to Article 35(4); (l) give advice on the processing operations referred to in Article 36(2); (m) encourage the drawing up of codes of conduct pursuant to Article 40(1) and provide an opinion and approve such codes of conduct which provide sufficient safeguards, pursuant to Article 40(5); (n) encourage the establishment of data protection certification mechanisms and of data protection seals and marks pursuant to Article 42(1), and approve the criteria of certification pursuant to Article 42(5); (o) where applicable, carry out a periodic review of certifications issued in accordance with Article 42(7); 4.5.2016 L 119/68 (p) draft and publish the criteria for accreditation of a body for monitoring codes of conduct pursuant to Article 41 and of a certification body pursuant to Article 43; (q) conduct the accreditation of a body for monitoring codes of conduct pursuant to Article 41 and of a certification body pursuant to Article 43; (r) authorise contractual clauses and provisions referred to in Article 46(3); (s) approve binding corporate rules pursuant to Article 47; (t) contribute to the activities of the Board; (u) keep internal records of infringements of this Regulation and of measures taken in accordance with Article 58(2); and (v) fulfil any other tasks related to the protection of personal data.\n2.Each supervisory authority shall facilitate the submission of complaints referred to in point (f) of paragraph 1 by measures such as a complaint submission form which can also be completed electronically, without excluding other means of communication.\n3.The performance of the tasks of 
each supervisory authority shall be free of charge for the data subject and, where applicable, for the data protection officer.\n4.Where requests are manifestly unfounded or excessive, in particular because of their repetitive character, the supervisory authority may charge a reasonable fee based on administrative costs, or refuse to act on the request. The supervisory authority shall bear the burden of demonstrating the manifestly unfounded or excessive character of the request.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4739, 0.1927],
#         [0.4739, 1.0000, 0.2989],
#         [0.1927, 0.2989, 1.0000]])
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 768
  }
  ```

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3561 |
| cosine_accuracy@3 | 0.3813 |
| cosine_accuracy@5 | 0.4343 |
| cosine_accuracy@10 | 0.4975 |
| cosine_precision@1 | 0.3561 |
| cosine_precision@3 | 0.3409 |
| cosine_precision@5 | 0.3157 |
| cosine_precision@10 | 0.2722 |
| cosine_recall@1 | 0.0834 |
| cosine_recall@3 | 0.2122 |
| cosine_recall@5 | 0.2805 |
| cosine_recall@10 | 0.3963 |
| **cosine_ndcg@10** | **0.4201** |
| cosine_mrr@10 | 0.3859 |
| cosine_map@100 | 0.467 |

#### Information Retrieval

* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 512
  }
  ```

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.351 |
| cosine_accuracy@3 | 0.3889 |
| cosine_accuracy@5 | 0.4394 |
| cosine_accuracy@10 | 0.4924 |
| cosine_precision@1 | 0.351 |
| cosine_precision@3 | 0.3401 |
| cosine_precision@5 | 0.3162 |
| cosine_precision@10 | 0.2747 |
| cosine_recall@1 | 0.0812 |
| cosine_recall@3 | 0.211 |
| cosine_recall@5 | 0.2783 |
| cosine_recall@10 | 0.389 |
| **cosine_ndcg@10** | **0.4177** |
| cosine_mrr@10 | 0.3825 |
| cosine_map@100 | 0.4658 |

#### Information Retrieval

* Dataset: `dim_256`
* Evaluated with
[<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 256
  }
  ```

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3434 |
| cosine_accuracy@3 | 0.3813 |
| cosine_accuracy@5 | 0.4066 |
| cosine_accuracy@10 | 0.4823 |
| cosine_precision@1 | 0.3434 |
| cosine_precision@3 | 0.3342 |
| cosine_precision@5 | 0.3106 |
| cosine_precision@10 | 0.2692 |
| cosine_recall@1 | 0.0771 |
| cosine_recall@3 | 0.1982 |
| cosine_recall@5 | 0.2634 |
| cosine_recall@10 | 0.3825 |
| **cosine_ndcg@10** | **0.4062** |
| cosine_mrr@10 | 0.3728 |
| cosine_map@100 | 0.455 |

#### Information Retrieval

* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 128
  }
  ```

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3434 |
| cosine_accuracy@3 | 0.3687 |
| cosine_accuracy@5 | 0.3965 |
| cosine_accuracy@10 | 0.4571 |
| cosine_precision@1 | 0.3434 |
| cosine_precision@3 | 0.3274 |
| cosine_precision@5 | 0.302 |
| cosine_precision@10 | 0.2598 |
| cosine_recall@1 | 0.0766 |
| cosine_recall@3 | 0.1919 |
| cosine_recall@5 | 0.2522 |
| cosine_recall@10 | 0.3588 |
| **cosine_ndcg@10** | **0.3918** |
| cosine_mrr@10 | 0.3658 |
| cosine_map@100 | 0.4479 |

#### Information Retrieval

* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 64
  }
  ```

| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2778 |
| cosine_accuracy@3 | 0.298 |
| cosine_accuracy@5 | 0.3359 |
| cosine_accuracy@10 | 0.3914 |
| cosine_precision@1 | 0.2778 |
| cosine_precision@3 | 0.266 |
| cosine_precision@5 | 0.249 |
| cosine_precision@10 | 0.2121 |
| cosine_recall@1 | 0.0611 |
| cosine_recall@3 | 0.1586 |
| cosine_recall@5 | 0.2205 |
| cosine_recall@10 | 0.3035 |
| **cosine_ndcg@10** | **0.3248** |
| cosine_mrr@10 | 0.3 |
| cosine_map@100 | 0.3815 |

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,580 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.29 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 361.72 tokens</li><li>max: 512 tokens</li></ul> |

* Samples:

| anchor | positive |
|:---|:---|
| <code>What date is mentioned in the text?</code> | <code>1.The controller and the processor shall ensure that the data protection officer is involved, properly and in a timely manner, in all issues which relate to the protection of personal data.
4.5.2016 L 119/55 <br>2.The controller and processor shall support the data protection officer in performing the tasks referred to in</code> | | <code>Under what condition is the culpable character of the action raised in regards to computer software infringement?</code> | <code>Any person who, in contravention of the provisions of this law or of the provisions of lawfully ratified multilateral international conventions on the protection of copyright, unlawfully makes a fixation of a work or of copies, reproduces them directly or indirectly, temporarily or permanently in any form, in whole or in part, translates, adapts, alters or transforms them, or distributes them to the public by sale or other means, or possesses with the intent of distributing them, rents, performs in public, broadcasts by radio or television or any other means, communicates to the public works or copies by any means, imports copies of a work illegally produced abroad without the consent of the author and, in general, exploits works, reproductions or copies being the object of copyright or acts against the moral right of the author to decide freely on the publication and the presentation of his work to the public without additions or deletions, shall be liable to imprisonment of no less t...</code> | | <code>Under what circumstances does the Board issue an opinion?</code> | <code>1.The Board shall issue an opinion where a competent supervisory authority intends to adopt any of the measures below. 
To that end, the competent supervisory authority shall communicate the draft decision to the Board, when it: (a) aims to adopt a list of the processing operations subject to the requirement for a data protection impact assessment pursuant to Article 35(4); (b) concerns a matter pursuant to Article 40(7) whether a draft code of conduct or an amendment or extension to a code of conduct complies with this Regulation; 4.5.2016 L 119/73 (c) aims to approve the criteria for accreditation of a body pursuant to Article 41(3) or a certification body pursuant to Article 43(3); (d) aims to determine standard data protection clauses referred to in point (d) of Article 46(2) and in Article 28(8); (e) aims to authorise contractual clauses referred to in point (a) of Article 46(3); or (f) aims to approve binding corporate rules within the meaning of Article 47<br>2.Any superviso...</code> |

* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 4
- `learning_rate`: 3e-05
- `num_train_epochs`: 20
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
-
`weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
-
`resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:-------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | -1 | -1 | - | 0.1407 | 0.1159 | 0.1664 | 0.1436 | 0.0873 | | 0.2020 | 10 | 33.0406 | - | - | - | - | - | | 0.4040 | 20 | 33.881 | - | - | - | - | - | | 0.6061 | 30 | 32.1639 | - | - | - | - | - | | 0.8081 | 40 | 33.3136 | - | - | - | - | - | | 1.0 | 50 | 29.8675 | 0.1560 | 0.1476 | 0.1983 | 0.1695 | 0.1143 | | 1.2020 | 60 | 30.2009 | - | - | - | - | - | | 1.4040 | 70 | 31.2315 | - | - | - | - | - | | 1.6061 | 80 | 29.9391 | - | - | - | - | - | | 1.8081 | 90 | 26.5559 | - | - | - | - | - | | 2.0 | 100 | 24.5218 | 0.2594 | 0.2418 | 0.2565 | 0.2544 | 0.1866 | | 2.2020 | 
110 | 24.1179 | - | - | - | - | - | | 2.4040 | 120 | 21.4049 | - | - | - | - | - | | 2.6061 | 130 | 20.8776 | - | - | - | - | - | | 2.8081 | 140 | 19.4587 | - | - | - | - | - | | 3.0 | 150 | 16.968 | 0.3429 | 0.3146 | 0.3069 | 0.3138 | 0.2469 | | 3.2020 | 160 | 16.8039 | - | - | - | - | - | | 3.4040 | 170 | 16.0707 | - | - | - | - | - | | 3.6061 | 180 | 15.3223 | - | - | - | - | - | | 3.8081 | 190 | 16.0491 | - | - | - | - | - | | 4.0 | 200 | 15.5165 | 0.3616 | 0.3459 | 0.3508 | 0.3295 | 0.2842 | | 4.2020 | 210 | 14.3943 | - | - | - | - | - | | 4.4040 | 220 | 14.2748 | - | - | - | - | - | | 4.6061 | 230 | 12.8711 | - | - | - | - | - | | 4.8081 | 240 | 12.5741 | - | - | - | - | - | | 5.0 | 250 | 13.7759 | 0.3853 | 0.3750 | 0.3721 | 0.3622 | 0.2984 | | 5.2020 | 260 | 10.9699 | - | - | - | - | - | | 5.4040 | 270 | 11.5325 | - | - | - | - | - | | 5.6061 | 280 | 11.4495 | - | - | - | - | - | | 5.8081 | 290 | 12.2022 | - | - | - | - | - | | 6.0 | 300 | 11.2322 | 0.3915 | 0.3850 | 0.3863 | 0.3719 | 0.3183 | | 6.2020 | 310 | 10.8115 | - | - | - | - | - | | 6.4040 | 320 | 10.7632 | - | - | - | - | - | | 6.6061 | 330 | 10.459 | - | - | - | - | - | | 6.8081 | 340 | 9.323 | - | - | - | - | - | | 7.0 | 350 | 9.6717 | 0.4037 | 0.3917 | 0.3993 | 0.3722 | 0.3064 | | 7.2020 | 360 | 9.1543 | - | - | - | - | - | | 7.4040 | 370 | 10.0379 | - | - | - | - | - | | 7.6061 | 380 | 9.5019 | - | - | - | - | - | | 7.8081 | 390 | 7.854 | - | - | - | - | - | | **8.0** | **400** | **9.0798** | **0.4201** | **0.4177** | **0.4062** | **0.3918** | **0.3248** | | 8.2020 | 410 | 10.2511 | - | - | - | - | - | | 8.4040 | 420 | 8.6804 | - | - | - | - | - | | 8.6061 | 430 | 8.4118 | - | - | - | - | - | | 8.8081 | 440 | 8.1008 | - | - | - | - | - | | 9.0 | 450 | 6.7377 | 0.4181 | 0.4177 | 0.4098 | 0.3880 | 0.3335 | | 9.2020 | 460 | 7.6816 | - | - | - | - | - | | 9.4040 | 470 | 8.7339 | - | - | - | - | - | | 9.6061 | 480 | 8.4428 | - | - | - | - | - | | 9.8081 | 490 | 8.5048 | - | - | - | - | - | | 10.0 | 
500 | 8.0388 | 0.4153 | 0.4173 | 0.4063 | 0.3882 | 0.3197 | | -1 | -1 | - | 0.4201 | 0.4177 | 0.4062 | 0.3918 | 0.3248 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.51.3 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model 
Card authors.* -->
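The MatryoshkaLoss configuration above trains embeddings whose leading dimensions are usable on their own, so at inference time a full embedding can be truncated to one of the listed `matryoshka_dims` and L2-renormalized. A minimal stdlib sketch of that truncation step (illustrative only — this is not the sentence-transformers API, which handles truncation via its own options):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and L2-renormalize, as done when
    serving a Matryoshka-trained embedding at a smaller dimension."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0  # guard against zero norm
    return [x / norm for x in head]

# Stand-in for a full 768-dim embedding, truncated to a smaller config.
full = [3.0, 4.0, 1.0, 2.0]
print(truncate_embedding(full, 2))  # [0.6, 0.8]
```

The same idea is why the evaluation table reports `ndcg@10` separately at 768, 512, 256, 128, and 64 dimensions.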
moyixiao/Qwen3-0.6B-bnpo7-f16-150
moyixiao
2025-09-15T22:26:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T22:26:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757975028
svarekagerp
2025-09-15T22:24:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T22:24:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bugkiller2025/smol_Thinking
bugkiller2025
2025-09-15T22:24:02Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:HuggingFaceTB/SmolVLM-Base", "base_model:finetune:HuggingFaceTB/SmolVLM-Base", "endpoints_compatible", "region:us" ]
null
2025-09-15T22:23:58Z
--- base_model: HuggingFaceTB/SmolVLM-Base library_name: transformers model_name: smol_Thinking tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for smol_Thinking This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bugkiller2025/smol_Thinking", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
smoorsmith/softmasking_coding_1
smoorsmith
2025-09-15T22:22:16Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "base_model:adapter:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "region:us" ]
null
2025-09-15T22:12:39Z
--- base_model: smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
smoorsmith/softmasking_coding_2
smoorsmith
2025-09-15T22:21:04Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "base_model:adapter:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "region:us" ]
null
2025-09-15T22:12:50Z
--- base_model: smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
yufeng1/OpenThinker-7B-reasoning-lora-merged-type3-e1-2
yufeng1
2025-09-15T22:20:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T22:20:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Julleim-i1-GGUF
mradermacher
2025-09-15T22:18:41Z
6
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:trashpanda-org/Julleim", "base_model:quantized:trashpanda-org/Julleim", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-14T07:34:28Z
--- base_model: trashpanda-org/Julleim language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/trashpanda-org/Julleim <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Julleim-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Julleim-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | | | 
[GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/Julleim-i1-GGUF/resolve/main/Julleim.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
smoorsmith/softmasking_coding_4
smoorsmith
2025-09-15T22:18:05Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "base_model:adapter:smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking", "region:us" ]
null
2025-09-15T22:13:03Z
--- base_model: smoorsmith/Dream-Coder-v0-Instruct-7B-Transparent-Masking library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Reihaneh/wav2vec2_ur_hi_50_epochs_5
Reihaneh
2025-09-15T22:15:32Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-15T22:15:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757974413
svarekagerp
2025-09-15T22:15:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T22:14:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
facu1321/facu1321
facu1321
2025-09-15T22:14:42Z
1
0
null
[ "license:other", "region:us" ]
null
2024-11-09T06:27:17Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
POPIKZ/popiks
POPIKZ
2025-09-15T22:14:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-15T22:14:05Z
--- license: apache-2.0 ---
AdoCleanCode/TBD-LLaMA-DAC-Denoiser-checkpoint-12600
AdoCleanCode
2025-09-15T22:13:27Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T22:12:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
schuler/experimental-DISTIL-32
schuler
2025-09-15T22:11:41Z
14
0
transformers
[ "transformers", "safetensors", "kphi3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-14T03:01:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
csikasote/mms-1b-all-bemgen-combined-m50f100-42-DAT-4e-1
csikasote
2025-09-15T22:10:08Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "bemgen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-15T21:17:38Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/mms-1b-all tags: - automatic-speech-recognition - bemgen - mms - generated_from_trainer model-index: - name: mms-1b-all-bemgen-combined-m50f100-42-DAT-4e-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mms-1b-all-bemgen-combined-m50f100-42-DAT-4e-1 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset. It achieves the following results on the evaluation set: - Loss: 0.2721 - Cer: 0.0767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 3.56 | 0.5618 | 100 | 3.0027 | 1.0 | | 1.1897 | 1.1236 | 200 | 0.9603 | 0.2119 | | 0.7163 | 1.6854 | 300 | 0.3618 | 0.1030 | | 0.6309 | 2.2472 | 400 | 0.3283 | 0.0939 | | 0.696 | 2.8090 | 500 | 0.3058 | 0.0877 | | 0.6895 | 3.3708 | 600 | 0.2999 | 0.0847 | | 0.7015 | 3.9326 | 700 | 0.2938 | 0.0802 | | 0.6893 | 4.4944 | 800 | 0.2888 | 0.0811 | | 0.6711 | 5.0562 | 900 | 0.2887 | 0.0829 | | 0.6836 | 5.6180 | 1000 | 0.2900 | 0.0783 | | 0.6954 | 6.1798 | 1100 | 0.2844 | 0.0796 | | 0.6897 | 6.7416 
| 1200 | 0.2880 | 0.0797 | | 0.6496 | 7.3034 | 1300 | 0.2844 | 0.0789 | | 0.6582 | 7.8652 | 1400 | 0.2806 | 0.0785 | | 0.6335 | 8.4270 | 1500 | 0.2778 | 0.0785 | | 0.6408 | 8.9888 | 1600 | 0.2721 | 0.0767 | | 0.6096 | 9.5506 | 1700 | 0.2750 | 0.0770 | | 0.6108 | 10.1124 | 1800 | 0.2758 | 0.0783 | | 0.6535 | 10.6742 | 1900 | 0.2761 | 0.0789 | ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.0
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-1e-4-v3_4301
luckeciano
2025-09-15T22:10:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T17:37:48Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-1e-4-v3_4301 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-1e-4-v3_4301 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-1e-4-v3_4301", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/kupmok8h) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yufeng1/OpenThinker-7B-reasoning-lora-merged-type3-e3-2
yufeng1
2025-09-15T22:10:03Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T22:09:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183
ChenWu98
2025-09-15T22:07:29Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048", "base_model:finetune:ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048", "endpoints_compatible", "region:us" ]
null
2025-09-15T06:32:48Z
--- base_model: ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048 library_name: transformers model_name: numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183 This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048](https://huggingface.co/ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_condition_2048). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_183", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/kuamgsqk) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
locuslab/safelm-1.7b_rephrase_refusal_moral_ed_600B
locuslab
2025-09-15T22:04:27Z
10
0
null
[ "pytorch", "llama", "model", "transformer", "smollm2", "safety p", "dataset:locuslab/refuseweb", "dataset:locuslab/safeweb", "dataset:locuslab/moral_education", "dataset:HuggingFaceTB/smollm-corpus", "arxiv:2504.16980", "license:mit", "region:us" ]
null
2025-04-22T22:38:33Z
--- version: main family: smollm2-1.7b model_name: locuslab/safelm-1.7b_rephrase_refusal_moral_ed_600B license: mit tags: - model - transformer - smollm2 - safety p datasets: - locuslab/refuseweb - locuslab/safeweb - locuslab/moral_education - HuggingFaceTB/smollm-corpus --- # SafeLM-1.7B SafeLM is a 1.7B parameter model family that is trained via [Safety Pretraining](https://www.arxiv.org/abs/2504.16980). We train language models to be natively safe by incorporating safety directly into the pretraining pipeline. This is our natively safe base model. Our safety data curation involves scoring harmful content, rephrasing and contextualizing potentially harmful examples, and refusal training throughout pretraining. Please check out our [paper](https://www.arxiv.org/abs/2504.16980) and [website](https://locuslab.github.io/safety-pretraining/) for more details! ## Model Details - **Architecture:** SmolLM2 - **Parameters:** 1.7B ## Training Configuration ```yaml optimizer: class_path: torch.optim.AdamW init_args: lr: 0.0005 weight_decay: 0.01 precision: bf16-mixed seed: 42 train: global_batch_size: 1024 max_seq_length: 2048 max_tokens: 600000000000 micro_batch_size: 8 ``` ## Quickstart ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("locuslab/safelm-1.7b_rephrase_refusal_moral_ed_600B") tokenizer = AutoTokenizer.from_pretrained("locuslab/safelm-1.7b_rephrase_refusal_moral_ed_600B") ``` ## Citation If you find our work helpful, please cite our work as: ``` @article{maini2025safety, title={Safety pretraining: Toward the next generation of safe ai}, author={Maini, Pratyush and Goyal, Sachin and Sam, Dylan and Robey, Alex and Savani, Yash and Jiang, Yiding and Zou, Andy and Lipton, Zachary C and Kolter, J Zico}, journal={arXiv preprint arXiv:2504.16980}, year={2025} } ```
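The training configuration above implies a gradient-accumulation schedule once the hardware is fixed. A minimal sketch of that arithmetic (the batch sizes come from the config; the device count and the divisibility check are our own assumptions, not stated in the card):

```python
# Deriving gradient-accumulation steps from the SafeLM training config above.
# GLOBAL_BATCH_SIZE and MICRO_BATCH_SIZE come from the card's YAML;
# the device count is a hypothetical input the card does not specify.

GLOBAL_BATCH_SIZE = 1024  # train.global_batch_size
MICRO_BATCH_SIZE = 8      # train.micro_batch_size (per device)

def grad_accum_steps(n_devices: int) -> int:
    """Accumulation steps so n_devices micro-batches reach the global batch."""
    per_optimizer_step = MICRO_BATCH_SIZE * n_devices
    if GLOBAL_BATCH_SIZE % per_optimizer_step != 0:
        raise ValueError("global batch must be a multiple of micro_batch * devices")
    return GLOBAL_BATCH_SIZE // per_optimizer_step

print(grad_accum_steps(8))   # 1024 / (8 * 8) = 16
print(grad_accum_steps(64))  # 1024 / (8 * 64) = 2
```

The same arithmetic applies to any fixed-global-batch pretraining setup; only the two config constants change.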
vefrwb/blockassist
vefrwb
2025-09-15T22:00:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bipedal rabid scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T21:47:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bipedal rabid scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
uwcc/cartoonDoodle
uwcc
2025-09-15T21:57:51Z
10
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-12T04:09:36Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A church in a field on a sunny day, [trigger] style. output: url: samples/1757973381177__000004000_0.jpg - text: A seal plays with a ball on the beach, [trigger] style. output: url: samples/1757973399320__000004000_1.jpg - text: A clown at the circus rides on a zebra, [trigger] style. output: url: samples/1757973417486__000004000_2.jpg - text: '[trigger]' output: url: samples/1757973435643__000004000_3.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: cartoonDoodle license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # cartoonDoodle Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `cartoonDoodle` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/uwcc/cartoonDoodle/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('uwcc/cartoonDoodle', weight_name='cartoonDoodle.safetensors') image = pipeline('A church in a field on a sunny day, cartoonDoodle style.').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
smartvest-llc/gemma-3-270m-it
smartvest-llc
2025-09-15T21:55:50Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "gemma3", "gemma", "google", "conversational", "arxiv:2503.19786", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:2311.07911", "arxiv:2311.12022", "arxiv:2411.04368", "arxiv:1904.09728", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2403.07974", "arxiv:2305.03111", "arxiv:2405.04520", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2310.02255", "arxiv:2312.11805", "base_model:google/gemma-3-270m", "base_model:finetune:google/gemma-3-270m", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T21:55:41Z
--- base_model: google/gemma-3-270m license: gemma tags: - gemma3 - gemma - google pipeline_tag: text-generation library_name: transformers extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you're required to review and agree to Google's usage license. To do this, please ensure you're logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each, for the 4B, 12B, and 27B sizes. - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B and 270M sizes. - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context up to 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B and 270M sizes per request, subtracting the request input tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://arxiv.org/abs/2503.19786}, publisher={Google DeepMind}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens, the 1B with 2 trillion tokens, and the 270M with 6 trillion tokens. The knowledge cutoff date for the training data was August 2024. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. 
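The input/output budget described above can be made concrete with a toy helper. The constants come from this card (256 tokens per encoded image, a 128K context for the 4B/12B/27B sizes and 32K for the 1B and 270M sizes); the function itself is our own illustrative sketch, not part of any Gemma or Transformers API, and 128K is treated as 128,000 for simplicity:

```python
# Toy sketch of Gemma 3 context budgeting, using the figures quoted above.
# IMAGE_TOKENS and the context sizes come from the card; the helper is
# illustrative arithmetic only.

IMAGE_TOKENS = 256        # each image is encoded to 256 tokens (4B/12B/27B)
CONTEXT_LARGE = 128_000   # 4B, 12B, and 27B sizes
CONTEXT_SMALL = 32_000    # 1B and 270M sizes (text-only input)

def output_budget(prompt_tokens: int, n_images: int = 0,
                  context: int = CONTEXT_LARGE) -> int:
    """Output tokens left after subtracting the request's input tokens."""
    used = prompt_tokens + n_images * IMAGE_TOKENS
    if used > context:
        raise ValueError("input exceeds the context window")
    return context - used

# A 1,000-token prompt with two images leaves 126,488 tokens for generation.
print(output_budget(1_000, n_images=2))
```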
The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. 
These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation. Evaluation results marked with **IT** are for instruction-tuned models. Evaluation results marked with **PT** are for pre-trained models.
#### Gemma 3 270M | **Benchmark** | **n-shot** | **Gemma 3 PT 270M** | | :------------------------ | :-----------: | ------------------: | | [HellaSwag][hellaswag] | 10-shot | 40.9 | | [BoolQ][boolq] | 0-shot | 61.4 | | [PIQA][piqa] | 0-shot | 67.7 | | [TriviaQA][triviaqa] | 5-shot | 15.4 | | [ARC-c][arc] | 25-shot | 29.0 | | [ARC-e][arc] | 0-shot | 57.7 | | [WinoGrande][winogrande] | 5-shot | 52.0 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [triviaqa]: https://arxiv.org/abs/1705.03551 [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 | **Benchmark** | **n-shot** | **Gemma 3 IT 270m** | | :------------------------ | :-----------: | ------------------: | | [HellaSwag][hellaswag] | 0-shot | 37.7 | | [PIQA][piqa] | 0-shot | 66.2 | | [ARC-c][arc] | 0-shot | 28.2 | | [WinoGrande][winogrande] | 0-shot | 52.3 | | [BIG-Bench Hard][bbh] | few-shot | 26.7 | | [IF Eval][ifeval] | 0-shot | 51.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [bbh]: https://paperswithcode.com/dataset/bbh [ifeval]: https://arxiv.org/abs/2311.07911 #### Gemma 3 1B, 4B, 12B & 27B ##### Reasoning and factuality | Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B | |--------------------------------|--------|:-------------:|:-------------:|:--------------:|:--------------:| | [GPQA][gpqa] Diamond | 0-shot | 19.2 | 30.8 | 40.9 | 42.4 | | [SimpleQA][simpleqa] | 0-shot | 2.2 | 4.0 | 6.3 | 10.0 | | [FACTS Grounding][facts-grdg] | - | 36.4 | 70.1 | 75.8 | 74.9 | | [BIG-Bench Hard][bbh] | 0-shot | 39.1 | 72.2 | 85.7 | 87.6 | | [BIG-Bench Extra Hard][bbeh] | 0-shot | 7.2 | 11.0 | 16.3 | 19.3 | | [IFEval][ifeval] | 0-shot | 80.2 | 90.2 | 88.9 | 90.4 | | Benchmark | n-shot | Gemma 3 
PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [gpqa]: https://arxiv.org/abs/2311.12022 [simpleqa]: https://arxiv.org/abs/2411.04368 [facts-grdg]: https://goo.gle/FACTS_paper [bbeh]: https://github.com/google-deepmind/bbeh [ifeval]: https://arxiv.org/abs/2311.07911 [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 ##### STEM and code | Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B | |----------------------------|--------|:-------------:|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] (Pro) | 0-shot | 14.7 | 43.6 | 60.6 | 67.5 | | [LiveCodeBench][lcb] | 0-shot | 1.9 | 12.6 | 24.6 | 29.7 | | [Bird-SQL][bird-sql] (dev) | - | 6.4 | 36.3 | 47.9 | 54.4 | | [Math][math] | 0-shot | 48.0 | 75.6 | 83.8 | 89.0 | | HiddenMath | 0-shot | 
15.8 | 43.0 | 54.5 | 60.3 | | [MBPP][mbpp] | 3-shot | 35.2 | 63.2 | 73.0 | 74.4 | | [HumanEval][humaneval] | 0-shot | 41.5 | 71.3 | 85.4 | 87.8 | | [Natural2Code][nat2code] | 0-shot | 56.0 | 70.3 | 80.7 | 84.5 | | [GSM8K][gsm8k] | 0-shot | 62.8 | 89.2 | 94.4 | 95.9 | | Benchmark | n-shot | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 [lcb]: https://arxiv.org/abs/2403.07974 [bird-sql]: https://arxiv.org/abs/2305.03111 [nat2code]: https://arxiv.org/abs/2405.04520 #### Multilingual | Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B | |--------------------------------------|--------|:-------------:|:-------------:|:--------------:|:--------------:| | [Global-MMLU-Lite][global-mmlu-lite] | 0-shot | 34.2 | 54.5 | 69.5 | 75.1 | | [ECLeKTic][eclektic] | 0-shot | 1.4 | 4.6 | 10.3 | 16.7 | | [WMT24++][wmt24pp] | 0-shot | 35.9 | 46.8 | 51.6 | 53.4 | | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | 
[WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 ##### Multimodal | Benchmark | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B | |-----------------------------------|:-------------:|:--------------:|:--------------:| | [MMMU][mmmu] (val) | 48.8 | 59.6 | 64.9 | | [DocVQA][docvqa] | 75.8 | 87.1 | 86.6 | | [InfoVQA][info-vqa] | 50.0 | 64.9 | 70.6 | | [TextVQA][textvqa] | 57.8 | 67.7 | 65.1 | | [AI2D][ai2d] | 74.8 | 84.2 | 84.5 | | [ChartQA][chartqa] | 68.8 | 75.7 | 78.0 | | [VQAv2][vqav2] (val) | 62.4 | 71.6 | 71.0 | | [MathVista][mathvista] (testmini) | 50.0 | 62.9 | 67.6 | | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | [COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: 
https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ [mathvista]: https://arxiv.org/abs/2310.02255 ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. 
High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English language prompts. ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
- Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. 
In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; the input data pre-processing and posterior evaluations are described in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: - **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. - **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. - **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. - **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. 
### Benefits At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives. [g3-tech-report]: https://arxiv.org/abs/2503.19786 [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3 [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3 [terms]: https://ai.google.dev/gemma/terms [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/jax-ml/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
mlx-community/Ring-mini-2.0-8bit
mlx-community
2025-09-15T21:55:47Z
0
0
mlx
[ "mlx", "safetensors", "bailing_moe", "text-generation", "conversational", "custom_code", "base_model:inclusionAI/Ring-mini-2.0", "base_model:quantized:inclusionAI/Ring-mini-2.0", "license:mit", "8-bit", "region:us" ]
text-generation
2025-09-15T14:50:02Z
--- license: mit base_model: inclusionAI/Ring-mini-2.0 pipeline_tag: text-generation library_name: mlx tags: - mlx --- # mlx-community/Ring-mini-2.0-8bit This model [mlx-community/Ring-mini-2.0-8bit](https://huggingface.co/mlx-community/Ring-mini-2.0-8bit) was converted to MLX format from [inclusionAI/Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0) using mlx-lm version **0.27.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Ring-mini-2.0-8bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757973180
svarekagerp
2025-09-15T21:54:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T21:54:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fpadovani/cds_shuffle_1gram_13
fpadovani
2025-09-15T21:52:25Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T21:02:56Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: cds_shuffle_1gram_13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cds_shuffle_1gram_13 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 256 - seed: 13 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 492 | 3.9801 | | 4.5748 | 2.0 | 984 | 3.7338 | | 3.5529 | 3.0 | 1476 | 3.6244 | | 3.3273 | 4.0 | 1968 | 3.5657 | | 3.2028 | 5.0 | 2460 | 3.5403 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
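Since the losses reported by the trainer are mean token cross-entropies, the model's validation perplexity can be read straight off the table above; a quick sketch:

```python
import math

# Final-epoch validation cross-entropy loss from the table above.
final_val_loss = 3.5403

# For a causal LM evaluated with mean token negative log-likelihood,
# perplexity is simply exp(loss).
perplexity = math.exp(final_val_loss)
print(round(perplexity, 2))  # -> 34.48
```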
abnormalmapstudio/Qwen3-Next-80B-A3B-Thinking-8bit-mlx
abnormalmapstudio
2025-09-15T21:51:20Z
0
0
mlx
[ "mlx", "qwen3_next", "8-bit", "affine", "text-generation", "base_model:Qwen/Qwen3-Next-80B-A3B-Thinking", "base_model:finetune:Qwen/Qwen3-Next-80B-A3B-Thinking", "license:apache-2.0", "region:us" ]
text-generation
2025-09-15T21:50:10Z
--- license: apache-2.0 library_name: mlx pipeline_tag: text-generation base_model: Qwen/Qwen3-Next-80B-A3B-Thinking tags: - mlx - qwen3_next - 8-bit - affine - text-generation quantization_config: bits: 8 mode: affine group_size: 64 model-index: - name: Qwen3-Next-80B-A3B-Thinking 8-bit (MLX) results: [] --- # Qwen3-Next-80B-A3B-Thinking โ€” MLX 8-bit (affine) Apple MLX-optimized 8-bit affine-quantized checkpoint of the base model `Qwen/Qwen3-Next-80B-A3B-Thinking` for local inference on Apple Silicon. Key details - Format: MLX runtime, safetensors sharded weights - Quantization: affine int8, group_size=64 - Task: text generation / chat - Tokenizer: provided via `tokenizer.json` (BPE) with `chat_template.jinja` ## Usage (MLX) ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate repo_id = "abnormalmapstudio/Qwen3-Next-80B-A3B-Thinking-8bit-mlx" model, tokenizer = load(repo_id) out = generate(model, tokenizer, "List 5 creative dinner ideas.", max_tokens=200) print(out) ``` ## Benchmarks - Will be added after upload completes; see `scripts/bench/qwen_mxfp4_vs_int4.py` and `scripts/bench/model_queue_eval.py`. ## License - Apache-2.0 for this packaging. See `LICENSE`. - Base model license and terms apply (Qwen/Qwen3-Next-80B-A3B-Thinking).
haznitrama/babybabellm-gpt_bert-kor-ema
haznitrama
2025-09-15T21:50:43Z
0
0
null
[ "safetensors", "gpt_bert", "custom_code", "region:us" ]
null
2025-09-15T21:40:22Z
# haznitrama/babybabellm-gpt_bert-kor-ema Converted GPT-BERT style model (variant: ema) for language **kor**. Weights stored using **safetensors** (no pickle). ## Configuration ```json { "attention_probs_dropout_prob": 0.1, "hidden_dropout_prob": 0.1, "hidden_size": 384, "intermediate_size": 1280, "max_position_embeddings": 512, "position_bucket_size": 32, "num_attention_heads": 6, "num_hidden_layers": 12, "vocab_size": 8192, "layer_norm_eps": 1e-05, "auto_map": { "AutoConfig": "configuration_gpt_bert.GPTBertConfig", "AutoModelForMaskedLM": "modeling_gpt_bert.GPTBertForMaskedLM" }, "return_dict": true, "output_hidden_states": false, "torchscript": false, "dtype": "float32", "pruned_heads": {}, "tie_word_embeddings": true, "chunk_size_feed_forward": 0, "is_encoder_decoder": false, "is_decoder": false, "cross_attention_hidden_size": null, "add_cross_attention": false, "tie_encoder_decoder": false, "architectures": [ "GPTBertForMaskedLM" ], "finetuning_task": null, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "task_specific_params": null, "problem_type": null, "tokenizer_class": null, "prefix": null, "bos_token_id": null, "pad_token_id": null, "eos_token_id": null, "sep_token_id": null, "decoder_start_token_id": null, "max_length": 20, "min_length": 0, "do_sample": false, "early_stopping": false, "num_beams": 1, "num_beam_groups": 1, "diversity_penalty": 0.0, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "typical_p": 1.0, "repetition_penalty": 1.0, "length_penalty": 1.0, "no_repeat_ngram_size": 0, "encoder_no_repeat_ngram_size": 0, "bad_words_ids": null, "num_return_sequences": 1, "output_scores": false, "return_dict_in_generate": false, "forced_bos_token_id": null, "forced_eos_token_id": null, "remove_invalid_values": false, "exponential_decay_length_penalty": null, "suppress_tokens": null, "begin_suppress_tokens": null, "_name_or_path": "", "transformers_version": "4.56.1", "tf_legacy_loss": false, "use_bfloat16": 
false, "model_type": "gpt_bert", "output_attentions": false } ``` Tokenizer file: `tokenizer_kor_vs8192.json` ## Usage ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained('haznitrama/babybabellm-gpt_bert-kor-ema', trust_remote_code=True) tok = AutoTokenizer.from_pretrained('haznitrama/babybabellm-gpt_bert-kor-ema') ids = tok('Hello world', return_tensors='pt') out = model(**ids) ``` ## Notes - Converted on 2025-09-15T21:50:39.022129Z - Safe serialization enabled (model.safetensors) - This repository includes custom modeling code; set `trust_remote_code=True`.
TeetouchQQ/exp-model7-augment
TeetouchQQ
2025-09-15T21:44:53Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-15T21:44:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757972565
svarekagerp
2025-09-15T21:44:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T21:43:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Amie69/deberta-rm
Amie69
2025-09-15T21:43:10Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "reward-trainer", "trl", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-15T21:25:02Z
--- base_model: microsoft/deberta-v3-base library_name: transformers model_name: reward_model tags: - generated_from_trainer - reward-trainer - trl licence: license --- # Model Card for reward_model This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start This is a sequence-classification reward model, so it is queried with the `text-classification` pipeline (it scores text rather than generating it): ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" answer = "I would choose the future, because seeing how things turn out would teach me the most." scorer = pipeline("text-classification", model="Amie69/deberta-rm", device="cuda") print(scorer(question + "\n" + answer)[0]) ``` ## Training procedure This model was trained with TRL's `RewardTrainer`. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.1 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
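TRL's `RewardTrainer` fits the classification head with a pairwise (Bradley-Terry style) objective, minimizing `-log(sigmoid(r_chosen - r_rejected))` over preference pairs; a minimal, model-free sketch of that loss:

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry style loss used by TRL's RewardTrainer:
    # -log(sigmoid(reward_chosen - reward_rejected))
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ranked pair incurs a small loss; an inverted pair a large one.
print(pairwise_reward_loss(2.0, -1.0) < pairwise_reward_loss(-1.0, 2.0))  # True
```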
nightmedia/Qwen3-30B-A3B-YOYO-V3-mxfp4-mlx
nightmedia
2025-09-15T21:39:04Z
0
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "merge", "text-generation", "conversational", "en", "zh", "base_model:YOYO-AI/Qwen3-30B-A3B-YOYO-V3", "base_model:quantized:YOYO-AI/Qwen3-30B-A3B-YOYO-V3", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-09-15T13:30:53Z
--- license: apache-2.0 language: - en - zh base_model: YOYO-AI/Qwen3-30B-A3B-YOYO-V3 pipeline_tag: text-generation tags: - merge - mlx library_name: mlx --- # Qwen3-30B-A3B-YOYO-V3-mxfp4-mlx Where Qwen3-30B-A3B-YOYO-V3-mxfp4 sits in the performance spectrum compared to: ```bash The base Thinking model (Qwen3-30B-A3B-Thinking-2507-bf16) The base Coder model (unsloth-Qwen3-Coder-30B-A3B-Instruct-qx6) The best V2 model (Qwen3-30B-A3B-YOYO-V2-qx6-hi) ``` Key Metrics ```bash Model ARC Challenge ARC Easy BoolQ HellaSwag OpenBookQA PIQA Winogrande V3-mxfp4 0.464 0.541 0.875 0.692 0.422 0.779 0.639 Base Thinking(bf16) 0.421 0.448 0.682 0.635 0.402 0.771 0.669 Base Coder (qx6) 0.422 0.532 0.881 0.546 0.432 0.724 0.576 Best V2 (qx6-hi) 0.531 0.690 0.885 0.685 0.448 0.785 0.646 ``` V3-mxfp4 compared to the Three Reference Models === We'll calculate average improvement (in percentage points) across all 7 metrics: ```bash A V3-mxfp4 vs. Thinking (bf16) B V3-mxfp4 vs. Coder (qx6) C V3-mxfp4 vs. V2 (qx6-hi) Metric A(Thinking) B(Coder) C(V2) ARC Challenge +0.043 +0.042 -0.067 ARC Easy +0.093 +0.009 -0.149 BoolQ +0.193 -0.006 -0.010 HellaSwag +0.057 +0.146 +0.007 OpenBookQA +0.020 -0.010 -0.026 PIQA +0.008 +0.055 -0.006 Winogrande -0.030 +0.063 -0.007 ``` Average Performance Position ```bash Comparison Avg. Improvement V3-mxfp4 vs. Thinking (bf16) +0.057 pp V3-mxfp4 vs. Coder (qx6) +0.038 pp V3-mxfp4 vs. V2 (qx6-hi) -0.053 pp ``` This means: ```bash V3-mxfp4 is ~5.7 pp better than the base Thinking model (on average). V3-mxfp4 is ~3.8 pp better than the base Coder model (on average). V3-mxfp4 is ~5.3 pp worse than the V2 model (on average). ``` Interpretation of Position ```bash Model Type V3-mxfp4 Performance vs. Reference Base Thinking Model ✅ Significantly better (avg. +5.7 pp) Base Coder Model ✅ Slightly better (avg. +3.8 pp) V2 Model ❌ Slightly worse (avg. 
-5.3 pp) ``` Summary === The V3-mxfp4 model: Is better than both base models, confirming it is a meaningful upgrade. Is slightly worse than the V2 model, but this is expected since the V2 was optimized for high performance. 📌 Average Position as a Hybrid Model: ```bash It is ~5.7 pp better than Thinking It is ~3.8 pp better than Coder It is ~5.3 pp worse than V2 ``` Qwen3-30B-A3B-YOYO-V3-mxfp4 compared with Qwen3-30B-A3B-Thinking-2507-bf16 === Performance Results ```bash Metric Change Significance ARC Challenge +0.043 (+10.2%) Significant improvement ARC Easy +0.093 (+20.8%) Major improvement, especially on reasoning tasks BoolQ +0.193 (+28.3%) Very significant improvement, likely due to better reasoning HellaSwag +0.057 (+8.9%) Noticeable improvement, common-sense reasoning OpenBookQA +0.020 (+4.9%) Improvement in knowledge-based QA PIQA +0.008 (+1.0%) Slight improvement, no major change Winogrande -0.030 (-4.5%) Slight decline, but not meaningful ``` Comparison Summary ```bash Metric V3-mxfp4 Thinking-bf16 Difference ARC Challenge 46.4% 42.1% +4.3 pp ARC Easy 54.1% 44.8% +9.3 pp BoolQ 87.5% 68.2% +19.3 pp HellaSwag 69.2% 63.5% +5.7 pp OpenBookQA 42.2% 40.2% +2.0 pp PIQA 77.9% 77.1% +0.8 pp Winogrande 63.9% 66.9% -3.0 pp ``` 📌 Conclusion The V3-mxfp4 model is significantly better than the base Thinking-2507-bf16 model across all key reasoning tasks: ```bash ARC Challenge is up by 4.3 percentage points. ARC Easy is up by 9.3 pp, a major improvement. BoolQ shows the largest gain (+19.3 pp), indicating a major boost in logical reasoning. The only metric that shows a slight decline is Winogrande (-3 pp), but this is not meaningful. ``` 💡 Key Takeaway The V3-mxfp4 model is a clear upgrade over the base Thinking model, confirming that: - The V3 series (including its mxfp4 variant) is better than the base Thinking model. - This supports the idea that V3 was designed to improve upon the base Thinking model with better reasoning and performance. 
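The quoted averages can be recomputed from the per-metric delta table above; a quick sketch (small differences from the quoted figures are expected, since the deltas themselves are rounded to three decimals):

```python
# Per-metric deltas of V3-mxfp4 vs. the base Thinking model, taken from the table above
# (ARC Challenge, ARC Easy, BoolQ, HellaSwag, OpenBookQA, PIQA, Winogrande).
deltas_vs_thinking = [0.043, 0.093, 0.193, 0.057, 0.020, 0.008, -0.030]

avg_improvement = sum(deltas_vs_thinking) / len(deltas_vs_thinking)
print(round(avg_improvement, 3))  # roughly +0.055, i.e. about 5.5 pp on average
```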
This model [nightmedia/Qwen3-30B-A3B-YOYO-V3-mxfp4-mlx](https://huggingface.co/nightmedia/Qwen3-30B-A3B-YOYO-V3-mxfp4-mlx) was converted to MLX format from [YOYO-AI/Qwen3-30B-A3B-YOYO-V3](https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-YOYO-V3) using mlx-lm version **0.27.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("nightmedia/Qwen3-30B-A3B-YOYO-V3-mxfp4-mlx") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
Itzgenz2412/Deris
Itzgenz2412
2025-09-15T21:37:40Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "en", "dataset:HuggingFaceM4/FineVision", "base_model:Qwen/Qwen-Image-Edit", "base_model:adapter:Qwen/Qwen-Image-Edit", "license:apache-2.0", "region:us" ]
null
2025-09-15T21:33:08Z
--- license: apache-2.0 datasets: - HuggingFaceM4/FineVision language: - en metrics: - accuracy base_model: - deepseek-ai/DeepSeek-V3.1-Base - Qwen/Qwen-Image-Edit - baidu/ERNIE-4.5-21B-A3B-Thinking new_version: Qwen/Qwen-Image-Edit library_name: adapter-transformers tags: - code ---
kanishka/opt-babylm2-rewritten-clean-spacy-earlystop_multi-adj-strict-reversed-bpe_seed-1024_1e-3
kanishka
2025-09-15T21:36:02Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:kanishka/babylm2-rewritten-clean_multi-adj-strict-reversed", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T13:29:37Z
--- library_name: transformers tags: - generated_from_trainer datasets: - kanishka/babylm2-rewritten-clean_multi-adj-strict-reversed metrics: - accuracy model-index: - name: opt-babylm2-rewritten-clean-spacy-earlystop_multi-adj-strict-reversed-bpe_seed-1024_1e-3 results: - task: name: Causal Language Modeling type: text-generation dataset: name: kanishka/babylm2-rewritten-clean_multi-adj-strict-reversed type: kanishka/babylm2-rewritten-clean_multi-adj-strict-reversed metrics: - name: Accuracy type: accuracy value: 0.47020675745799956 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-babylm2-rewritten-clean-spacy-earlystop_multi-adj-strict-reversed-bpe_seed-1024_1e-3 This model was trained from scratch on the kanishka/babylm2-rewritten-clean_multi-adj-strict-reversed dataset. It achieves the following results on the evaluation set: - Loss: 2.7262 - Accuracy: 0.4702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 1024 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.1807 | 1.0 | 2232 | 3.8849 | 0.3484 | | 3.5017 | 2.0 | 4464 | 3.3516 | 0.3997 | | 3.1731 | 3.0 | 6696 | 3.1361 | 0.4214 | | 3.0078 | 4.0 | 8928 | 3.0288 | 
0.4321 | | 2.8876 | 5.0 | 11160 | 2.9675 | 0.4387 | | 2.8298 | 6.0 | 13392 | 2.9280 | 0.4430 | | 2.7879 | 7.0 | 15624 | 2.9017 | 0.4458 | | 2.7515 | 8.0 | 17856 | 2.8825 | 0.4478 | | 2.7326 | 9.0 | 20088 | 2.8690 | 0.4496 | | 2.7136 | 10.0 | 22320 | 2.8553 | 0.4509 | | 2.6971 | 11.0 | 24552 | 2.8476 | 0.4519 | | 2.6787 | 12.0 | 26784 | 2.8409 | 0.4527 | | 2.6834 | 13.0 | 29016 | 2.8380 | 0.4531 | | 2.6732 | 14.0 | 31248 | 2.8324 | 0.4538 | | 2.6593 | 15.0 | 33480 | 2.8137 | 0.4561 | | 2.6118 | 16.0 | 35712 | 2.7873 | 0.4598 | | 2.5587 | 17.0 | 37944 | 2.7642 | 0.4628 | | 2.5064 | 18.0 | 40176 | 2.7423 | 0.4661 | | 2.4397 | 19.0 | 42408 | 2.7292 | 0.4686 | | 2.3698 | 20.0 | 44640 | 2.7262 | 0.4702 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.1
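Two of the quantities above are derived rather than set directly; a quick check of the effective batch size and the perplexity implied by the final eval loss:

```python
import math

# total_train_batch_size = per-device batch * gradient accumulation steps
# (a single training device is assumed here).
per_device_batch = 32
grad_accum_steps = 8
assert per_device_batch * grad_accum_steps == 256

# Perplexity implied by the final validation loss of 2.7262.
print(round(math.exp(2.7262), 2))
```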
OpenMed/OpenMed-ZeroShot-NER-Genomic-Medium-209M
OpenMed
2025-09-15T21:35:26Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "gene-recognition", "genetics", "genomics", "molecular-biology", "gene", "genetic_variant", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:35:04Z
--- widget: - text: "The BRCA2 gene is associated with hereditary breast cancer." - text: "Mutations in the CFTR gene cause cystic fibrosis." - text: "The APOE gene variant affects Alzheimer's disease risk." - text: "The HTT gene provides instructions for making a protein called huntingtin." - text: "Sickle cell disease is caused by a mutation in the HBB gene." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - gene-recognition - genetics - genomics - molecular-biology - gene - genetic_variant language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Genomic-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Medium-209M) **Specialized model for Gene Entity Recognition - Gene-related entities** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Targets **gene and genetics entities**, handling symbol/name variants commonly found in genomics literature. Useful for **genetic association studies**, **variant curation**, and **genomics informatics**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities such as diseases, chemicals, genes, species, and clinical findings directly from unstructured text, without the need for task-specific retraining. 
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated GELLUS dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `Cell-line-name` **💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## 📊 Dataset Gellus corpus targets gene recognition and genetics entities for genomics and molecular biology applications. 
The Gellus corpus is a biomedical NER dataset specifically designed for gene recognition and genetics entity extraction in molecular biology literature. This corpus contains comprehensive annotations for gene names, genetic variants, and genomics-related entities that are essential for genetic research and genomics applications. The dataset supports the development of automated systems for gene mention identification, genetic association studies, and genomics text mining. It is particularly valuable for identifying genes involved in hereditary diseases, genetic disorders, and molecular genetics research. The corpus serves as a benchmark for evaluating NER models used in genetics research, personalized medicine, and genomics informatics, contributing to advances in precision medicine and genetic counseling applications. ## 📊 Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.97` - **F1 Improvement vs Base Model**: `79.9%` ### 🏆 Top F1 Improvements on GELLUS Dataset | Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % | |------|-------|--------:|------------:|----:|------:| | 🥇 1 | [OpenMed-ZeroShot-NER-Genomic-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Large-459M) | 0.5361 | 0.9775 | 0.4414 | 82.3% | | 🥈 2 | [OpenMed-ZeroShot-NER-Genomic-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Medium-209M) | 0.5376 | 0.9674 | 0.4298 | 79.9% | | 🥉 3 | [OpenMed-ZeroShot-NER-Genomic-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-XLarge-770M) | 0.6875 | 0.9003 | 0.2128 | 30.9% | | 4 | [OpenMed-ZeroShot-NER-Genomic-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Small-166M) | 0.4694 | 0.8082 | 0.3388 | 72.2% | | 5 | [OpenMed-ZeroShot-NER-Genomic-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Multi-209M) | 0.4000 | 0.7333 | 0.3333 | 83.3% | *Rankings are 
sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genomic-Medium-209M model_name = "OpenMed/OpenMed-ZeroShot-NER-Genomic-Medium-209M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "The BRCA2 gene is associated with hereditary breast cancer." labels = ['Cell-line-name'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types 💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "Cell-line-name", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
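The entities returned by `predict_entities` are plain dictionaries, so simple post-processing goes a long way. The sketch below groups predictions by label and keeps only confident spans; it assumes the usual GLiNER output shape (dicts with `text`, `label`, and `score` keys) and uses mock predictions in place of real model output:

```python
from collections import defaultdict

def group_entities(entities, min_score=0.5):
    """Group GLiNER-style predictions by label, keeping only confident spans.

    Assumes each entity is a dict with "text", "label", and "score" keys,
    matching the typical shape of model.predict_entities output.
    """
    grouped = defaultdict(list)
    for ent in entities:
        if ent["score"] >= min_score:
            grouped[ent["label"]].append((ent["text"], round(ent["score"], 3)))
    # Sort each label's mentions by descending confidence
    return {label: sorted(spans, key=lambda s: -s[1]) for label, spans in grouped.items()}

# Mock predictions standing in for real model.predict_entities output
mock = [
    {"text": "BRCA2", "label": "Cell-line-name", "score": 0.91},
    {"text": "breast cancer", "label": "Cell-line-name", "score": 0.32},
]
print(group_entities(mock))  # {'Cell-line-name': [('BRCA2', 0.91)]}
```

Grouping like this makes it easy to feed each label's top-scoring mentions into a downstream curation or review step.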
## ๐Ÿ“š Dataset Information - **Dataset**: GELLUS - **Description**: Gene Entity Recognition - Gene-related entities ### Training Details - **Base Model**: gliner_medium-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M
OpenMed
2025-09-15T21:34:53Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "cancer-genetics", "oncology", "gene-regulation", "cancer-research", "amino_acid", "anatomical_system", "cancer", "cell", "cellular_component", "developing_anatomical_structure", "gene_or_gene_product", "immaterial_anatomical_entity", "multi-tissue_structure", "organ", "organism", "organism_subdivision", "organism_substance", "pathological_formation", "simple_chemical", "tissue", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:34:27Z
--- widget: - text: "Mutations in KRAS gene drive oncogenic transformation." - text: "The tumor suppressor p53 pathway was disrupted." - text: "EGFR amplification promotes cancer cell proliferation." - text: "Loss of function of the PTEN gene is common in many cancers." - text: "The PI3K/AKT/mTOR pathway is a critical regulator of cell growth." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - cancer-genetics - oncology - gene-regulation - cancer-research - amino_acid - anatomical_system - cancer - cell - cellular_component - developing_anatomical_structure - gene_or_gene_product - immaterial_anatomical_entity - multi-tissue_structure - organ - organism - organism_subdivision - organism_substance - pathological_formation - simple_chemical - tissue language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Oncology-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M) **Specialized model for Cancer Genetics - Cancer-related genetic entities** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Oncology-focused model for **cancer genetics**, capturing genes, variants, and cellular processes in tumor biology contexts. Useful for **cancer pathway curation**, **driver gene analysis**, and **precision oncology literature mining**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. 
Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entitiesโ€”such as diseases, chemicals, genes, species, and clinical findingsโ€”directly from unstructured text, without the need for task-specific retraining. Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BIONLP2013_CG dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. 
**You can also add custom entity types without retraining the model**: - `Amino_acid` - `Anatomical_system` - `Cancer` - `Cell` - `Cellular_component` <details> <summary>See 11 more entity types...</summary> - `Developing_anatomical_structure` - `Gene_or_gene_product` - `Immaterial_anatomical_entity` - `Multi-tissue_structure` - `Organ` - `Organism` - `Organism_subdivision` - `Organism_substance` - `Pathological_formation` - `Simple_chemical` - `Tissue` </details> **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BioNLP 2013 CG corpus targets cancer genetics entities for oncology research and cancer genomics. The BioNLP 2013 CG (Cancer Genetics) corpus is a specialized dataset focusing on cancer genetics entities and gene regulation in oncology research. This corpus contains annotations for genes, proteins, and molecular processes specifically related to cancer biology and tumor genetics. Developed for the BioNLP Shared Task 2013, it supports the development of text mining systems for cancer research, oncological studies, and precision medicine applications. The dataset is particularly valuable for identifying cancer-related biomarkers, tumor suppressor genes, oncogenes, and therapeutic targets mentioned in cancer research literature. It serves as a benchmark for evaluating NER systems used in cancer genomics, personalized medicine, and oncology informatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. 
Base Model (on test dataset excluded from training)**: `0.82` - **F1 Improvement vs Base Model**: `53.4%` ### 🏆 Top F1 Improvements on BIONLP2013_CG Dataset | Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % | |------|-------|--------:|------------:|----:|------:| | 🥇 1 | [OpenMed-ZeroShot-NER-Oncology-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M) | 0.5534 | 0.8990 | 0.3456 | 62.5% | | 🥈 2 | [OpenMed-ZeroShot-NER-Oncology-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Medium-209M) | 0.4885 | 0.8765 | 0.3880 | 79.4% | | 🥉 3 | [OpenMed-ZeroShot-NER-Oncology-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-XLarge-770M) | 0.5953 | 0.8750 | 0.2797 | 47.0% | | 4 | [OpenMed-ZeroShot-NER-Oncology-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M) | 0.5324 | 0.8167 | 0.2842 | 53.4% | | 5 | [OpenMed-ZeroShot-NER-Oncology-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M) | 0.4343 | 0.7498 | 0.3154 | 72.6% | *Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M model_name = "OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "Mutations in KRAS gene drive oncogenic transformation." 
labels = ['Amino_acid', 'Anatomical_system', 'Cancer', 'Cell', 'Cellular_component', 'Developing_anatomical_structure', 'Gene_or_gene_product', 'Immaterial_anatomical_entity', 'Multi-tissue_structure', 'Organ', 'Organism', 'Organism_subdivision', 'Organism_substance', 'Pathological_formation', 'Simple_chemical', 'Tissue'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "Amino_acid", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
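With 16 fine-grained BioNLP 2013 CG labels, it can be convenient to roll predictions up into a few coarse groups for reporting. The mapping below is purely illustrative (a design choice, not part of the dataset), and the mock predictions stand in for real `predict_entities` output:

```python
# Illustrative coarse grouping of part of the BioNLP 2013 CG label set;
# this mapping is a reporting convenience, not defined by the dataset.
COARSE_MAP = {
    "Gene_or_gene_product": "molecular",
    "Simple_chemical": "molecular",
    "Amino_acid": "molecular",
    "Cancer": "pathology",
    "Pathological_formation": "pathology",
}

def coarse_counts(entities, mapping=COARSE_MAP, default="anatomy"):
    """Count predicted spans per coarse group (unmapped labels -> default)."""
    counts = {}
    for ent in entities:
        group = mapping.get(ent["label"], default)
        counts[group] = counts.get(group, 0) + 1
    return counts

# Mock predictions standing in for real model.predict_entities output
mock = [
    {"text": "KRAS", "label": "Gene_or_gene_product", "score": 0.88},
    {"text": "lung carcinoma", "label": "Cancer", "score": 0.83},
    {"text": "epithelium", "label": "Tissue", "score": 0.71},
]
print(coarse_counts(mock))  # {'molecular': 1, 'pathology': 1, 'anatomy': 1}
```

Coarse counts like these are handy for sanity-checking corpus-level extraction runs before inspecting individual spans.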
## ๐Ÿ“š Dataset Information - **Dataset**: BIONLP2013_CG - **Description**: Cancer Genetics - Cancer-related genetic entities ### Training Details - **Base Model**: gliner-x-base - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
svarekagerp/blockassist-bc-bellowing_reptilian_bee_1757971949
svarekagerp
2025-09-15T21:33:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing reptilian bee", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T21:33:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing reptilian bee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M
OpenMed
2025-09-15T21:33:33Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "protein-recognition", "gene-recognition", "molecular-biology", "genomics", "protein", "dna", "rna", "cell_line", "cell_type", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:32:49Z
--- widget: - text: "The p53 protein plays a crucial role in tumor suppression." - text: "Expression of BRCA1 gene was significantly upregulated in breast tissue." - text: "The NF-kB pathway regulates inflammatory responses." - text: "Activation of the STAT3 signaling pathway is observed in many cancers." - text: "The experiment involved transfecting HeLa cells with a plasmid containing the target gene." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - protein-recognition - gene-recognition - molecular-biology - genomics - protein - dna - rna - cell_line - cell_type language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-DNA-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M) **Specialized model for Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Targets **molecular biology entities**: proteins, DNA/RNA, cell lines, and cell types in biomedical research content. Great for **pathway curation**, **molecular interaction mining**, and **omics-aware information extraction**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining. 
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated JNLPBA dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `DNA` - `RNA` - `cell_line` - `cell_type` - `protein` **💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## 📊 Dataset JNLPBA corpus focuses on biomedical named entity recognition for protein, DNA, RNA, cell line, and cell type entities. 
The JNLPBA (Joint Workshop on Natural Language Processing in Biomedicine and its Applications) corpus is a widely-used biomedical NER dataset derived from the GENIA corpus for the 2004 bio-entity recognition task. It contains annotations for five entity types: protein, DNA, RNA, cell line, and cell type, making it essential for molecular biology and genomics research applications. The corpus consists of MEDLINE abstracts annotated with biomedical entities relevant to gene and protein recognition tasks. It has been extensively used as a benchmark for evaluating biomedical NER systems and continues to be a standard evaluation dataset for developing machine learning models in computational biology and bioinformatics. ## 📊 Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.81` - **F1 Improvement vs Base Model**: `53.8%` ### 🏆 Top F1 Improvements on JNLPBA Dataset | Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % | |------|-------|--------:|------------:|----:|------:| | 🥇 1 | [OpenMed-ZeroShot-NER-DNA-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Large-459M) | 0.7006 | 0.8220 | 0.1214 | 17.3% | | 🥈 2 | [OpenMed-ZeroShot-NER-DNA-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Medium-209M) | 0.6928 | 0.8208 | 0.1280 | 18.5% | | 🥉 3 | [OpenMed-ZeroShot-NER-DNA-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M) | 0.5271 | 0.8106 | 0.2835 | 53.8% | | 4 | [OpenMed-ZeroShot-NER-DNA-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Base-220M) | 0.4896 | 0.7907 | 0.3011 | 61.5% | | 5 | [OpenMed-ZeroShot-NER-DNA-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Multi-209M) | 0.6660 | 0.7750 | 0.1090 | 16.4% | *Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. 
Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M model_name = "OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "The p53 protein plays a crucial role in tumor suppression." labels = ['DNA', 'RNA', 'cell_line', 'cell_type', 'protein'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types 💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "DNA", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
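To make the threshold trade-off concrete, you can count how many spans survive different cut-offs before committing to one. The helper below works on any list of GLiNER-style predictions (dicts with a `score` key); the scores here are mock values, not real model output:

```python
def counts_by_threshold(entities, thresholds=(0.1, 0.3, 0.5)):
    """Count how many predicted spans survive each threshold.

    A quick way to gauge permissiveness before running a full evaluation.
    """
    return {t: sum(1 for e in entities if e["score"] >= t) for t in thresholds}

# Mock predictions standing in for real model.predict_entities output
mock_scores = [
    {"text": "p53", "label": "protein", "score": 0.95},
    {"text": "HeLa", "label": "cell_line", "score": 0.42},
    {"text": "mRNA", "label": "RNA", "score": 0.18},
]
print(counts_by_threshold(mock_scores))  # {0.1: 3, 0.3: 2, 0.5: 1}
```

If lowering the threshold sharply inflates the span count, expect more false positives and plan a manual spot-check of the extra spans.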
## ๐Ÿ“š Dataset Information - **Dataset**: JNLPBA - **Description**: Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types ### Training Details - **Base Model**: gliner-x-large - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. 
Thank you!
Aasdfip/greedy_po_filtered_1
Aasdfip
2025-09-15T21:31:42Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-09-15T21:29:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Adipiz99/LAVA-Framework
Adipiz99
2025-09-15T21:30:31Z
0
0
lava-framework
[ "lava-framework", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "arxiv:2508.02521", "region:us" ]
null
2025-09-15T19:27:15Z
--- library_name: lava-framework tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: https://github.com/adipiz99/lava-framework - Paper: https://arxiv.org/abs/2508.02521 - Docs: [More Information Needed]
OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M
OpenMed
2025-09-15T21:29:21Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "chemical-entity-recognition", "drug-discovery", "pharmacology", "biocuration", "chemical", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:28:54Z
---
widget:
- text: "Administration of metformin reduced glucose levels significantly."
- text: "The study evaluated the efficacy of cisplatin in cancer treatment."
- text: "Patients received ibuprofen for inflammation management."
- text: "The patient's medication was switched to tamoxifen to prevent breast cancer recurrence."
- text: "Lithium carbonate is often prescribed for the management of bipolar disorder."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- chemical-entity-recognition
- drug-discovery
- pharmacology
- biocuration
- chemical
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Pharma-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M)

**Specialized model for Chemical Entity Recognition - Chemical entities from the BC5CDR dataset**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Focused on **chemical mentions** in the BC5CDR domain, capturing pharmaceutical compounds and therapeutic agents in context with diseases. Enables **pharmacovigilance**, **adverse event mining**, and **chemical–disease relation pipelines** when paired with downstream relation extraction.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities, such as diseases, chemicals, genes, species, and clinical findings, directly from unstructured text, without the need for task-specific retraining.

Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements.

Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology.

### 🎯 Key Features

- **Zero-Shot Capability**: Can recognize any entity type without specific training
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Fine-tuned on curated BC5CDR_CHEM dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
- **Flexible Entity Recognition**: Add custom entity types without retraining

### 🏷️ Supported Entity Types

This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**:

- `CHE`

**💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them.

## 📊 Dataset

BC5CDR-Chem focuses on chemical entity recognition from the BioCreative V Chemical-Disease Relation extraction task.

The BC5CDR-Chem corpus is part of the BioCreative V Chemical-Disease Relation (CDR) extraction challenge, specifically targeting chemical entity recognition in biomedical texts. This dataset contains 1,500 PubMed abstracts with 4,409 annotated chemical entities, designed to support automated drug discovery and pharmacovigilance applications. The corpus emphasizes chemical compounds, drugs, and therapeutic substances that are relevant for understanding chemical-disease relationships. It serves as a critical resource for developing NER systems that can identify chemical entities for downstream tasks like adverse drug reaction detection and drug repurposing research.

## 📊 Performance Metrics

### Current Model Performance

- **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.92`
- **F1 Improvement vs Base Model**: `40.3%`

### 🏆 Top F1 Improvements on BC5CDR_CHEM Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-Pharma-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Large-459M) | 0.7537 | 0.9542 | 0.2005 | 26.6% |
| 🥈 2 | [OpenMed-ZeroShot-NER-Pharma-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-XLarge-770M) | 0.7299 | 0.9463 | 0.2164 | 29.7% |
| 🥉 3 | [OpenMed-ZeroShot-NER-Pharma-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Medium-209M) | 0.6358 | 0.9457 | 0.3100 | 48.8% |
| 4 | [OpenMed-ZeroShot-NER-Pharma-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M) | 0.6554 | 0.9197 | 0.2643 | 40.3% |
| 5 | [OpenMed-ZeroShot-NER-Pharma-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M) | 0.6548 | 0.8931 | 0.2383 | 36.4% |

*Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M
model_name = "OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M"
model = GLiNER.from_pretrained(model_name)

# Example usage with default entity types
text = "Administration of metformin reduced glucose levels significantly."
labels = ['CHE']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "CHE", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.

## 📚 Dataset Information

- **Dataset**: BC5CDR_CHEM
- **Description**: Chemical Entity Recognition - Chemical entities from the BC5CDR dataset

### Training Details

- **Base Model**: gliner-x-base
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Cross-validation on held-out test set

## 💡 Use Cases

This model is particularly useful for:

- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying chemical compounds and drugs
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
- **Custom Entity Recognition**: Zero-shot detection of domain-specific entities

## 🔬 Model Architecture

- **Task**: Zero-Shot Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Biomedical text
- **Output**: Named entity predictions

## 📜 License

Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.

## 🤝 Contributing

I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on my latest releases and developments.

## Citation

If you use this model in your research or applications, please cite the following paper:

```bibtex
@misc{panahi2025openmedneropensourcedomainadapted,
      title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
      author={Maziyar Panahi},
      year={2025},
      eprint={2508.01630},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.01630},
}
```

Proper citation helps support and acknowledge my work. Thank you!
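The ΔF1 and ΔF1 % columns in the ranking table above follow a simple relative-improvement formula. A minimal, dependency-free sketch (the function name is mine, not part of the OpenMed tooling), reproducing the row for this model:

```python
def f1_improvement(base_f1: float, finetuned_f1: float) -> tuple[float, float]:
    """Return (absolute F1 delta, percentage delta relative to the base model)."""
    delta = finetuned_f1 - base_f1
    return delta, 100.0 * delta / base_f1

# Row for OpenMed-ZeroShot-NER-Pharma-Base-220M: base 0.6554, finetuned 0.9197.
delta, pct = f1_improvement(0.6554, 0.9197)
print(round(delta, 4), round(pct, 1))  # 0.2643 40.3
```

The same computation reproduces every ΔF1 % figure in the tables.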
OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M
OpenMed
2025-09-15T21:28:40Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "protein-interactions", "molecular-biology", "biochemistry", "systems-biology", "protein", "protein_complex", "protein_family", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:28:10Z
---
widget:
- text: "The Maillard reaction is responsible for the browning of many foods."
- text: "Casein micelles are the primary protein component of milk."
- text: "Starch gelatinization is a key process in cooking pasta and rice."
- text: "Polyphenols in green tea have antioxidant properties."
- text: "Omega-3 fatty acids are essential fats found in fish oil."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- protein-interactions
- molecular-biology
- biochemistry
- systems-biology
- protein
- protein_complex
- protein_family
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Protein-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M)

**Specialized model for Biomedical Entity Recognition - Various biomedical entities**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Focuses on **protein entities** (families, complexes, variants) and related molecular biology terms. Applicable to **protein–protein interaction mining**, **pathway modeling**, and **systems biology**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities, such as diseases, chemicals, genes, species, and clinical findings, directly from unstructured text, without the need for task-specific retraining.

Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements.

Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology.

### 🎯 Key Features

- **Zero-Shot Capability**: Can recognize any entity type without specific training
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Fine-tuned on curated FSU dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
- **Flexible Entity Recognition**: Add custom entity types without retraining

### 🏷️ Supported Entity Types

This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**:

- `protein`
- `protein_complex`
- `protein_enum`
- `protein_family_or_group`
- `protein_variant`

**💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them.

## 📊 Dataset

FSU corpus focuses on protein interactions and molecular biology entities for systems biology research.

The FSU (Florida State University) corpus is a biomedical NER dataset designed for protein interaction recognition and molecular biology entity extraction. This corpus contains annotations for proteins, protein complexes, protein families, protein variants, and molecular interaction entities relevant to systems biology and biochemistry research. The dataset supports the development of text mining systems for protein-protein interaction extraction, molecular pathway analysis, and systems biology applications. It is particularly valuable for identifying protein entities involved in cellular processes, signal transduction pathways, and molecular mechanisms. The corpus serves as a benchmark for evaluating NER systems used in proteomics research, drug discovery, and molecular biology informatics.

## 📊 Performance Metrics

### Current Model Performance

- **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.92`
- **F1 Improvement vs Base Model**: `63.9%`

### 🏆 Top F1 Improvements on FSU Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-Protein-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M) | 0.5612 | 0.9200 | 0.3589 | 63.9% |
| 🥈 2 | [OpenMed-ZeroShot-NER-Protein-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Medium-209M) | 0.5631 | 0.8995 | 0.3364 | 59.7% |
| 🥉 3 | [OpenMed-ZeroShot-NER-Protein-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-XLarge-770M) | 0.5659 | 0.8786 | 0.3127 | 55.3% |
| 4 | [OpenMed-ZeroShot-NER-Protein-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M) | 0.5230 | 0.8454 | 0.3224 | 61.6% |
| 5 | [OpenMed-ZeroShot-NER-Protein-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Multi-209M) | 0.5441 | 0.7810 | 0.2369 | 43.5% |

*Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M
model_name = "OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M"
model = GLiNER.from_pretrained(model_name)

# Example usage with default entity types
text = "The Maillard reaction is responsible for the browning of many foods."
labels = ['protein', 'protein_complex', 'protein_enum', 'protein_family_or_group', 'protein_variant']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "protein", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.

## 📚 Dataset Information

- **Dataset**: FSU
- **Description**: Biomedical Entity Recognition - Various biomedical entities

### Training Details

- **Base Model**: gliner_large-v2.1
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Cross-validation on held-out test set

## 💡 Use Cases

This model is particularly useful for:

- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying chemical compounds and drugs
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
- **Custom Entity Recognition**: Zero-shot detection of domain-specific entities

## 🔬 Model Architecture

- **Task**: Zero-Shot Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Biomedical text
- **Output**: Named entity predictions

## 📜 License

Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.

## 🤝 Contributing

I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on my latest releases and developments.

## Citation

If you use this model in your research or applications, please cite the following paper:

```bibtex
@misc{panahi2025openmedneropensourcedomainadapted,
      title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
      author={Maziyar Panahi},
      year={2025},
      eprint={2508.01630},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.01630},
}
```

Proper citation helps support and acknowledge my work. Thank you!
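The threshold behaviour described in the tip above can be illustrated without the gliner dependency. A hedged sketch over hypothetical prediction dicts shaped like `predict_entities` output (the texts, labels, and scores below are invented for illustration):

```python
# Hypothetical candidate spans, shaped like gliner's predict_entities output.
candidates = [
    {"text": "casein", "label": "protein", "score": 0.91},
    {"text": "casein micelles", "label": "protein_complex", "score": 0.42},
    {"text": "milk", "label": "protein", "score": 0.08},
]

def filter_by_threshold(preds, threshold):
    """Keep only predictions whose confidence clears the threshold."""
    return [p for p in preds if p["score"] >= threshold]

print(len(filter_by_threshold(candidates, 0.5)))  # strict default: 1 span survives
print(len(filter_by_threshold(candidates, 0.3)))  # permissive: 2 spans survive
```

Lower thresholds admit lower-confidence spans, which is why they help with custom or rare entity types at the cost of more false positives.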
jq/qwen3-14b-sunflower-20250915
jq
2025-09-15T21:27:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:jq/sunflower-14b-bs64-lr1e-4", "base_model:finetune:jq/sunflower-14b-bs64-lr1e-4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-15T21:27:34Z
--- base_model: jq/sunflower-14b-bs64-lr1e-4 tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jq - **License:** apache-2.0 - **Finetuned from model :** jq/sunflower-14b-bs64-lr1e-4 This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M
OpenMed
2025-09-15T21:27:20Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "disease-entity-recognition", "medical-diagnosis", "pathology", "biocuration", "disease", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:26:58Z
---
widget:
- text: "The patient was diagnosed with diabetes mellitus type 2."
- text: "Symptoms of Alzheimer's disease became apparent over several months."
- text: "Treatment for hypertension was initiated immediately."
- text: "A possible link between Crohn's disease and gut microbiota is being investigated."
- text: "The patient has a family history of cystic fibrosis."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- disease-entity-recognition
- medical-diagnosis
- pathology
- biocuration
- disease
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Disease-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M)

**Specialized model for Disease Entity Recognition - Disease entities from the BC5CDR dataset**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Specialized for **disease and condition recognition** from biomedical texts, covering clinical disorders and pathological states. Supports **patient phenotyping**, **disease indexing**, **literature triage**, and **clinical evidence aggregation**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities, such as diseases, chemicals, genes, species, and clinical findings, directly from unstructured text, without the need for task-specific retraining.

Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements.

Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology.

### 🎯 Key Features

- **Zero-Shot Capability**: Can recognize any entity type without specific training
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Fine-tuned on curated BC5CDR_DISEASE dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
- **Flexible Entity Recognition**: Add custom entity types without retraining

### 🏷️ Supported Entity Types

This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**:

- `DISEASE`

**💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them.

## 📊 Dataset

BC5CDR-Disease targets disease entity recognition from the BioCreative V Chemical-Disease Relation extraction corpus.

The BC5CDR-Disease corpus is the disease-focused component of the BioCreative V Chemical-Disease Relation (CDR) task, containing 1,500 PubMed abstracts with 5,818 annotated disease entities. This manually curated dataset is designed to advance automated disease name recognition for medical diagnosis, pathology research, and clinical decision support systems. The corpus includes annotations for various disease types, medical conditions, and pathological states mentioned in biomedical literature. It serves as a benchmark for evaluating NER models in clinical and biomedical applications where accurate disease entity identification is crucial for medical informatics and healthcare analytics.

## 📊 Performance Metrics

### Current Model Performance

- **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.83`
- **F1 Improvement vs Base Model**: `39.3%`

### 🏆 Top F1 Improvements on BC5CDR_DISEASE Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-Disease-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Large-459M) | 0.5890 | 0.9029 | 0.3138 | 53.3% |
| 🥈 2 | [OpenMed-ZeroShot-NER-Disease-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Medium-209M) | 0.5721 | 0.8848 | 0.3127 | 54.7% |
| 🥉 3 | [OpenMed-ZeroShot-NER-Disease-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-XLarge-770M) | 0.6969 | 0.8593 | 0.1624 | 23.3% |
| 4 | [OpenMed-ZeroShot-NER-Disease-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M) | 0.5952 | 0.8293 | 0.2341 | 39.3% |
| 5 | [OpenMed-ZeroShot-NER-Disease-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Multi-209M) | 0.5323 | 0.7969 | 0.2645 | 49.7% |

*Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M
model_name = "OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M"
model = GLiNER.from_pretrained(model_name)

# Example usage with default entity types
text = "The patient was diagnosed with diabetes mellitus type 2."
labels = ['DISEASE']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "DISEASE", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.

## 📚 Dataset Information

- **Dataset**: BC5CDR_DISEASE
- **Description**: Disease Entity Recognition - Disease entities from the BC5CDR dataset

### Training Details

- **Base Model**: gliner-x-base
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Cross-validation on held-out test set

## 💡 Use Cases

This model is particularly useful for:

- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying chemical compounds and drugs
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
- **Custom Entity Recognition**: Zero-shot detection of domain-specific entities

## 🔬 Model Architecture

- **Task**: Zero-Shot Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Biomedical text
- **Output**: Named entity predictions

## 📜 License

Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.

## 🤝 Contributing

I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on my latest releases and developments.

## Citation

If you use this model in your research or applications, please cite the following paper:

```bibtex
@misc{panahi2025openmedneropensourcedomainadapted,
      title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
      author={Maziyar Panahi},
      year={2025},
      eprint={2508.01630},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.01630},
}
```

Proper citation helps support and acknowledge my work. Thank you!
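The `flat_ner=True` flag used in the examples above requests non-overlapping spans; one common way to resolve overlapping candidates is greedy selection by score. A minimal sketch under that assumption (my own illustration with invented spans, not the exact gliner implementation):

```python
def flatten_spans(spans):
    """Greedily keep the highest-scoring spans, dropping any that overlap a kept span."""
    kept = []
    for span in sorted(spans, key=lambda s: s["score"], reverse=True):
        overlaps = any(span["start"] < k["end"] and k["start"] < span["end"] for k in kept)
        if not overlaps:
            kept.append(span)
    return sorted(kept, key=lambda s: s["start"])

# Hypothetical overlapping candidates for "diabetes mellitus type 2":
spans = [
    {"start": 31, "end": 56, "label": "DISEASE", "score": 0.95},  # "diabetes mellitus type 2"
    {"start": 31, "end": 48, "label": "DISEASE", "score": 0.60},  # "diabetes mellitus"
]
print(flatten_spans(spans))  # only the higher-scoring, longer span remains
```

With nested NER (`flat_ner=False`), both candidate spans would instead be returned.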
maximedb/DeepSeek-R1-Distill-Llama-8B-twentle
maximedb
2025-09-15T21:27:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "endpoints_compatible", "region:us" ]
null
2025-09-15T21:27:10Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B library_name: transformers model_name: DeepSeek-R1-Distill-Llama-8B-twentle tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for DeepSeek-R1-Distill-Llama-8B-twentle This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="maximedb/DeepSeek-R1-Distill-Llama-8B-twentle", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.4.1+cu124 - Datasets: 4.1.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
BootesVoid/cmfiozh96068qx0n0130uxpks_cmfll7ytf08b8x0n095fw3uh9
BootesVoid
2025-09-15T21:26:25Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-15T21:26:23Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ALYSSA427 --- # Cmfiozh96068Qx0N0130Uxpks_Cmfll7Ytf08B8X0N095Fw3Uh9 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ALYSSA427` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ALYSSA427", "lora_weights": "https://huggingface.co/BootesVoid/cmfiozh96068qx0n0130uxpks_cmfll7ytf08b8x0n095fw3uh9/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmfiozh96068qx0n0130uxpks_cmfll7ytf08b8x0n095fw3uh9', weight_name='lora.safetensors') image = pipeline('ALYSSA427').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You 
can use the [community tab](https://huggingface.co/BootesVoid/cmfiozh96068qx0n0130uxpks_cmfll7ytf08b8x0n095fw3uh9/discussions) to add images that show off what you've made with this LoRA.
OpenMed/OpenMed-ZeroShot-NER-DNA-Small-166M
OpenMed
2025-09-15T21:26:13Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "protein-recognition", "gene-recognition", "molecular-biology", "genomics", "protein", "dna", "rna", "cell_line", "cell_type", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:25:57Z
--- widget: - text: "The p53 protein plays a crucial role in tumor suppression." - text: "Expression of BRCA1 gene was significantly upregulated in breast tissue." - text: "The NF-kB pathway regulates inflammatory responses." - text: "Activation of the STAT3 signaling pathway is observed in many cancers." - text: "The experiment involved transfecting HeLa cells with a plasmid containing the target gene." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - protein-recognition - gene-recognition - molecular-biology - genomics - protein - dna - rna - cell_line - cell_type language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-DNA-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Small-166M) **Specialized model for Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Targets **molecular biology entities**: proteins, DNA/RNA, cell lines, and cell types in biomedical research content. Great for **pathway curation**, **molecular interaction mining**, and **omics-aware information extraction**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities (such as diseases, chemicals, genes, species, and clinical findings) directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated JNLPBA dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `DNA` - `RNA` - `cell_line` - `cell_type` - `protein` **💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## 📊 Dataset JNLPBA corpus focuses on biomedical named entity recognition for protein, DNA, RNA, cell line, and cell type entities.
The JNLPBA (Joint Workshop on Natural Language Processing in Biomedicine and its Applications) corpus is a widely-used biomedical NER dataset derived from the GENIA corpus for the 2004 bio-entity recognition task. It contains annotations for five entity types: protein, DNA, RNA, cell line, and cell type, making it essential for molecular biology and genomics research applications. The corpus consists of MEDLINE abstracts annotated with biomedical entities relevant to gene and protein recognition tasks. It has been extensively used as a benchmark for evaluating biomedical NER systems and continues to be a standard evaluation dataset for developing machine learning models in computational biology and bioinformatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.76` - **F1 Improvement vs Base Model**: `20.5%` ### ๐Ÿ† Top F1 Improvements on JNLPBA Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-DNA-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Large-459M) | 0.7006 | 0.8220 | 0.1214 | 17.3% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-DNA-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Medium-209M) | 0.6928 | 0.8208 | 0.1280 | 18.5% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-DNA-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M) | 0.5271 | 0.8106 | 0.2835 | 53.8% | | 4 | [OpenMed-ZeroShot-NER-DNA-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Base-220M) | 0.4896 | 0.7907 | 0.3011 | 61.5% | | 5 | [OpenMed-ZeroShot-NER-DNA-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Multi-209M) | 0.6660 | 0.7750 | 0.1090 | 16.4% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. 
Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Small-166M model_name = "OpenMed/OpenMed-ZeroShot-NER-DNA-Small-166M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "The p53 protein plays a crucial role in tumor suppression." labels = ['DNA', 'RNA', 'cell_line', 'cell_type', 'protein'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types 💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "DNA", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
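GLiNER's `flat_ner=True` resolves overlapping candidate spans internally; when running with `flat_ner=False` (nested output) and a flat view is needed afterwards, a greedy score-based resolution can be applied in post-processing. The sketch below assumes the dict shape returned by `predict_entities` (`start`, `end`, `label`, `score` keys); the sample spans and scores are invented for illustration:

```python
def resolve_overlaps(entities):
    """Greedy overlap resolution: keep the highest-scoring span first,
    drop any later candidate that overlaps an already-kept span.

    Each entity is a dict with "start", "end", "label", "score" keys,
    matching the shape of GLiNER's predict_entities output.
    """
    kept = []
    for ent in sorted(entities, key=lambda e: e["score"], reverse=True):
        # Keep only if it is disjoint from every span kept so far.
        if all(ent["end"] <= k["start"] or ent["start"] >= k["end"] for k in kept):
            kept.append(ent)
    return sorted(kept, key=lambda e: e["start"])

candidates = [
    {"start": 4, "end": 15, "label": "protein",   "score": 0.91},
    {"start": 4, "end": 7,  "label": "DNA",       "score": 0.55},  # overlaps the span above
    {"start": 20, "end": 28, "label": "cell_type", "score": 0.80},
]
flat = resolve_overlaps(candidates)  # keeps the 0.91 and 0.80 spans
```

This mirrors the usual flat-NER convention of preferring the most confident analysis when spans conflict.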
## ๐Ÿ“š Dataset Information - **Dataset**: JNLPBA - **Description**: Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types ### Training Details - **Base Model**: gliner_small-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. 
Thank you!
ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122
ChenWu98
2025-09-15T21:25:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_condition_2048", "base_model:finetune:ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_condition_2048", "endpoints_compatible", "region:us" ]
null
2025-09-15T05:44:52Z
--- base_model: ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_condition_2048 library_name: transformers model_name: numina_qwen_2.5_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_qwen_2.5_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122 This model is a fine-tuned version of [ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_condition_2048](https://huggingface.co/ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_condition_2048). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_teachers_no_reasoning_source_anneal_condition_split_1_from_122", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/rej6czrd) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M
OpenMed
2025-09-15T21:25:46Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "chemical-entity-recognition", "drug-discovery", "pharmacology", "biocuration", "chemical", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:25:26Z
--- widget: - text: "Administration of metformin reduced glucose levels significantly." - text: "The study evaluated the efficacy of cisplatin in cancer treatment." - text: "Patients received ibuprofen for inflammation management." - text: "The patient's medication was switched to tamoxifen to prevent breast cancer recurrence." - text: "Lithium carbonate is often prescribed for the management of bipolar disorder." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - chemical-entity-recognition - drug-discovery - pharmacology - biocuration - chemical language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Pharma-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M) **Specialized model for Chemical Entity Recognition - Chemical entities from the BC5CDR dataset** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Focused on **chemical mentions** in the BC5CDR domain, capturing pharmaceutical compounds and therapeutic agents in context with diseases. Enables **pharmacovigilance**, **adverse event mining**, and **chemical-disease relation pipelines** when paired with downstream relation extraction. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining.
Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities (such as diseases, chemicals, genes, species, and clinical findings) directly from unstructured text, without the need for task-specific retraining. Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BC5CDR_CHEM dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types.
**You can also add custom entity types without retraining the model**: - `CHE` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BC5CDR-Chem focuses on chemical entity recognition from the BioCreative V Chemical-Disease Relation extraction task. The BC5CDR-Chem corpus is part of the BioCreative V Chemical-Disease Relation (CDR) extraction challenge, specifically targeting chemical entity recognition in biomedical texts. This dataset contains 1,500 PubMed abstracts with 4,409 annotated chemical entities, designed to support automated drug discovery and pharmacovigilance applications. The corpus emphasizes chemical compounds, drugs, and therapeutic substances that are relevant for understanding chemical-disease relationships. It serves as a critical resource for developing NER systems that can identify chemical entities for downstream tasks like adverse drug reaction detection and drug repurposing research. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. 
Base Model (on test dataset excluded from training)**: `0.89` - **F1 Improvement vs Base Model**: `36.4%` ### 🏆 Top F1 Improvements on BC5CDR_CHEM Dataset | Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % | |------|-------|--------:|------------:|----:|------:| | 🥇 1 | [OpenMed-ZeroShot-NER-Pharma-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Large-459M) | 0.7537 | 0.9542 | 0.2005 | 26.6% | | 🥈 2 | [OpenMed-ZeroShot-NER-Pharma-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-XLarge-770M) | 0.7299 | 0.9463 | 0.2164 | 29.7% | | 🥉 3 | [OpenMed-ZeroShot-NER-Pharma-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Medium-209M) | 0.6358 | 0.9457 | 0.3100 | 48.8% | | 4 | [OpenMed-ZeroShot-NER-Pharma-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Base-220M) | 0.6554 | 0.9197 | 0.2643 | 40.3% | | 5 | [OpenMed-ZeroShot-NER-Pharma-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M) | 0.6548 | 0.8931 | 0.2383 | 36.4% | *Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M model_name = "OpenMed/OpenMed-ZeroShot-NER-Pharma-Multi-209M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "Administration of metformin reduced glucose levels significantly."
labels = ['CHE'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "CHE", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. ## ๐Ÿ“š Dataset Information - **Dataset**: BC5CDR_CHEM - **Description**: Chemical Entity Recognition - Chemical entities from the BC5CDR dataset ### Training Details - **Base Model**: gliner_multi-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named 
entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
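As a footnote to the usage section of this card: the `threshold` argument of `predict_entities` acts as a confidence cutoff over candidate spans, so its precision/recall trade-off can be illustrated with plain Python. The entity dicts and scores below are invented for illustration:

```python
def filter_by_threshold(candidates, threshold):
    """Keep candidate entities whose confidence meets the threshold,
    mimicking the effect of the `threshold` argument to predict_entities."""
    return [c for c in candidates if c["score"] >= threshold]

# Hypothetical candidate spans with made-up confidence scores.
candidates = [
    {"text": "metformin", "label": "CHE", "score": 0.93},
    {"text": "glucose",   "label": "CHE", "score": 0.42},
    {"text": "levels",    "label": "CHE", "score": 0.12},
]
strict  = filter_by_threshold(candidates, 0.5)  # only the high-confidence entity survives
relaxed = filter_by_threshold(candidates, 0.1)  # all three survive, more false positives
```

Lowering the cutoff from 0.5 to 0.1 triples the output here, which is exactly the permissiveness (and false-positive risk) the usage tip describes.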
OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M
OpenMed
2025-09-15T21:25:15Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "cancer-genetics", "oncology", "gene-regulation", "cancer-research", "amino_acid", "anatomical_system", "cancer", "cell", "cellular_component", "developing_anatomical_structure", "gene_or_gene_product", "immaterial_anatomical_entity", "multi-tissue_structure", "organ", "organism", "organism_subdivision", "organism_substance", "pathological_formation", "simple_chemical", "tissue", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:24:56Z
--- widget: - text: "Mutations in KRAS gene drive oncogenic transformation." - text: "The tumor suppressor p53 pathway was disrupted." - text: "EGFR amplification promotes cancer cell proliferation." - text: "Loss of function of the PTEN gene is common in many cancers." - text: "The PI3K/AKT/mTOR pathway is a critical regulator of cell growth." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - cancer-genetics - oncology - gene-regulation - cancer-research - amino_acid - anatomical_system - cancer - cell - cellular_component - developing_anatomical_structure - gene_or_gene_product - immaterial_anatomical_entity - multi-tissue_structure - organ - organism - organism_subdivision - organism_substance - pathological_formation - simple_chemical - tissue language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Oncology-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M) **Specialized model for Cancer Genetics - Cancer-related genetic entities** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Oncology-focused model for **cancer genetics**, capturing genes, variants, and cellular processes in tumor biology contexts. Useful for **cancer pathway curation**, **driver gene analysis**, and **precision oncology literature mining**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining.
Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities (such as diseases, chemicals, genes, species, and clinical findings) directly from unstructured text, without the need for task-specific retraining. Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BIONLP2013_CG dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types.
**You can also add custom entity types without retraining the model**: - `Amino_acid` - `Anatomical_system` - `Cancer` - `Cell` - `Cellular_component` <details> <summary>See 11 more entity types...</summary> - `Developing_anatomical_structure` - `Gene_or_gene_product` - `Immaterial_anatomical_entity` - `Multi-tissue_structure` - `Organ` - `Organism` - `Organism_subdivision` - `Organism_substance` - `Pathological_formation` - `Simple_chemical` - `Tissue` </details> **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BioNLP 2013 CG corpus targets cancer genetics entities for oncology research and cancer genomics. The BioNLP 2013 CG (Cancer Genetics) corpus is a specialized dataset focusing on cancer genetics entities and gene regulation in oncology research. This corpus contains annotations for genes, proteins, and molecular processes specifically related to cancer biology and tumor genetics. Developed for the BioNLP Shared Task 2013, it supports the development of text mining systems for cancer research, oncological studies, and precision medicine applications. The dataset is particularly valuable for identifying cancer-related biomarkers, tumor suppressor genes, oncogenes, and therapeutic targets mentioned in cancer research literature. It serves as a benchmark for evaluating NER systems used in cancer genomics, personalized medicine, and oncology informatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. 
Base Model (on test dataset excluded from training)**: `0.75` - **F1 Improvement vs Base Model**: `72.6%` ### 🏆 Top F1 Improvements on BIONLP2013_CG Dataset | Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % | |------|-------|--------:|------------:|----:|------:| | 🥇 1 | [OpenMed-ZeroShot-NER-Oncology-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M) | 0.5534 | 0.8990 | 0.3456 | 62.5% | | 🥈 2 | [OpenMed-ZeroShot-NER-Oncology-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Medium-209M) | 0.4885 | 0.8765 | 0.3880 | 79.4% | | 🥉 3 | [OpenMed-ZeroShot-NER-Oncology-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-XLarge-770M) | 0.5953 | 0.8750 | 0.2797 | 47.0% | | 4 | [OpenMed-ZeroShot-NER-Oncology-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M) | 0.5324 | 0.8167 | 0.2842 | 53.4% | | 5 | [OpenMed-ZeroShot-NER-Oncology-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M) | 0.4343 | 0.7498 | 0.3154 | 72.6% | *Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M model_name = "OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "Mutations in KRAS gene drive oncogenic transformation."
labels = ['Amino_acid', 'Anatomical_system', 'Cancer', 'Cell', 'Cellular_component', 'Developing_anatomical_structure', 'Gene_or_gene_product', 'Immaterial_anatomical_entity', 'Multi-tissue_structure', 'Organ', 'Organism', 'Organism_subdivision', 'Organism_substance', 'Pathological_formation', 'Simple_chemical', 'Tissue'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "Amino_acid", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
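### Aggregating Predictions Across Documents

For corpus-level analyses it can help to tally predicted labels over many abstracts. The snippet below is a minimal sketch that assumes each prediction is a dict with `text`, `label`, and `score` keys, matching the `print(entity)` output above; the sample predictions are illustrative, not real model output.

```python
from collections import Counter

def count_entities(predictions_per_doc, min_score=0.5):
    """Tally entity labels across documents, keeping only confident predictions."""
    counts = Counter()
    for doc_predictions in predictions_per_doc:
        for entity in doc_predictions:
            if entity["score"] >= min_score:
                counts[entity["label"]] += 1
    return counts

# Illustrative predictions in the shape predict_entities is expected to return
batch = [
    [{"text": "KRAS", "label": "Gene_or_gene_product", "score": 0.92},
     {"text": "tumor", "label": "Cancer", "score": 0.81}],
    [{"text": "TP53", "label": "Gene_or_gene_product", "score": 0.88},
     {"text": "fibroblast", "label": "Cell", "score": 0.42}],
]
print(count_entities(batch))
# Counter({'Gene_or_gene_product': 2, 'Cancer': 1})
```

Because this operates on plain dicts, it works unchanged with whatever label set you pass to `predict_entities`.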
## ๐Ÿ“š Dataset Information - **Dataset**: BIONLP2013_CG - **Description**: Cancer Genetics - Cancer-related genetic entities ### Training Details - **Base Model**: gliner_multi-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
leonMW/DeepSeek-R1-Distill-Qwen-1.5B-C
leonMW
2025-09-15T21:24:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T18:46:49Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B library_name: transformers model_name: DeepSeek-R1-Distill-Qwen-1.5B-C tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for DeepSeek-R1-Distill-Qwen-1.5B-C This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="leonMW/DeepSeek-R1-Distill-Qwen-1.5B-C", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leonwenderoth-tu-darmstadt/huggingface/runs/87brhp9s) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M
OpenMed
2025-09-15T21:24:46Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "protein-interactions", "molecular-biology", "biochemistry", "systems-biology", "protein", "protein_complex", "protein_family", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:24:25Z
--- widget: - text: "The Maillard reaction is responsible for the browning of many foods." - text: "Casein micelles are the primary protein component of milk." - text: "Starch gelatinization is a key process in cooking pasta and rice." - text: "Polyphenols in green tea have antioxidant properties." - text: "Omega-3 fatty acids are essential fats found in fish oil." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - protein-interactions - molecular-biology - biochemistry - systems-biology - protein - protein_complex - protein_family language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Protein-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M) **Specialized model for Biomedical Entity Recognition - Various biomedical entities** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Focuses on **protein entities** (families, complexes, variants) and related molecular biology terms. Applicable to **protein–protein interaction mining**, **pathway modeling**, and **systems biology**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated FSU dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `protein` - `protein_complex` - `protein_enum` - `protein_family_or_group` - `protein_variant` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset FSU corpus focuses on protein interactions and molecular biology entities for systems biology research. 
The FSU (Florida State University) corpus is a biomedical NER dataset designed for protein interaction recognition and molecular biology entity extraction. This corpus contains annotations for proteins, protein complexes, protein families, protein variants, and molecular interaction entities relevant to systems biology and biochemistry research. The dataset supports the development of text mining systems for protein-protein interaction extraction, molecular pathway analysis, and systems biology applications. It is particularly valuable for identifying protein entities involved in cellular processes, signal transduction pathways, and molecular mechanisms. The corpus serves as a benchmark for evaluating NER systems used in proteomics research, drug discovery, and molecular biology informatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.85` - **F1 Improvement vs Base Model**: `61.6%` ### ๐Ÿ† Top F1 Improvements on FSU Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-Protein-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M) | 0.5612 | 0.9200 | 0.3589 | 63.9% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-Protein-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Medium-209M) | 0.5631 | 0.8995 | 0.3364 | 59.7% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-Protein-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-XLarge-770M) | 0.5659 | 0.8786 | 0.3127 | 55.3% | | 4 | [OpenMed-ZeroShot-NER-Protein-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M) | 0.5230 | 0.8454 | 0.3224 | 61.6% | | 5 | [OpenMed-ZeroShot-NER-Protein-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Multi-209M) | 0.5441 | 0.7810 | 0.2369 | 43.5% | *Rankings are sorted by finetuned F1 and show ฮ”F1% 
over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M model_name = "OpenMed/OpenMed-ZeroShot-NER-Protein-Base-220M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "The Maillard reaction is responsible for the browning of many foods." labels = ['protein', 'protein_complex', 'protein_enum', 'protein_family_or_group', 'protein_variant'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types 💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "protein", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
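### Per-Label Thresholds and Overlap Resolution

Different entity types often warrant different score cutoffs. The helper below is a minimal sketch assuming `predict_entities` returns dicts with `text`, `label`, `score`, `start`, and `end` keys; it applies a per-label threshold and then resolves overlapping spans by keeping the highest-scoring one. The sample predictions are hand-written for illustration.

```python
def filter_predictions(entities, thresholds, default=0.5):
    """Apply per-label score thresholds, then drop overlapping spans,
    keeping the highest-scoring entity in each overlap group."""
    kept = [e for e in entities if e["score"] >= thresholds.get(e["label"], default)]
    kept.sort(key=lambda e: e["score"], reverse=True)
    selected = []
    for e in kept:
        overlaps = any(e["start"] < s["end"] and s["start"] < e["end"] for s in selected)
        if not overlaps:
            selected.append(e)
    return sorted(selected, key=lambda e: e["start"])

preds = [  # illustrative, mimicking the expected predict_entities output shape
    {"text": "MAPK", "label": "protein", "score": 0.90, "start": 0, "end": 4},
    {"text": "MAPK cascade", "label": "protein_complex", "score": 0.55, "start": 0, "end": 12},
    {"text": "Ras", "label": "protein", "score": 0.35, "start": 20, "end": 23},
]
print(filter_predictions(preds, {"protein": 0.6, "protein_complex": 0.5}))
```

Tuning per-label cutoffs this way is purely client-side, so you can adjust precision/recall per entity type without re-running the model.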
## ๐Ÿ“š Dataset Information - **Dataset**: FSU - **Description**: Biomedical Entity Recognition - Various biomedical entities ### Training Details - **Base Model**: gliner-x-base - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
OpenMed/OpenMed-ZeroShot-NER-DNA-Tiny-60M
OpenMed
2025-09-15T21:24:14Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "protein-recognition", "gene-recognition", "molecular-biology", "genomics", "protein", "dna", "rna", "cell_line", "cell_type", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:24:00Z
--- widget: - text: "The p53 protein plays a crucial role in tumor suppression." - text: "Expression of BRCA1 gene was significantly upregulated in breast tissue." - text: "The NF-kB pathway regulates inflammatory responses." - text: "Activation of the STAT3 signaling pathway is observed in many cancers." - text: "The experiment involved transfecting HeLa cells with a plasmid containing the target gene." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - protein-recognition - gene-recognition - molecular-biology - genomics - protein - dna - rna - cell_line - cell_type language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-DNA-Tiny-60M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Tiny-60M) **Specialized model for Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Targets **molecular biology entities**: proteins, DNA/RNA, cell lines, and cell types in biomedical research content. Great for **pathway curation**, **molecular interaction mining**, and **omics-aware information extraction**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### 🎯 Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated JNLPBA dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### 🏷️ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `DNA` - `RNA` - `cell_line` - `cell_type` - `protein` **💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## 📊 Dataset JNLPBA corpus focuses on biomedical named entity recognition for protein, DNA, RNA, cell line, and cell type entities.
The JNLPBA (Joint Workshop on Natural Language Processing in Biomedicine and its Applications) corpus is a widely-used biomedical NER dataset derived from the GENIA corpus for the 2004 bio-entity recognition task. It contains annotations for five entity types: protein, DNA, RNA, cell line, and cell type, making it essential for molecular biology and genomics research applications. The corpus consists of MEDLINE abstracts annotated with biomedical entities relevant to gene and protein recognition tasks. It has been extensively used as a benchmark for evaluating biomedical NER systems and continues to be a standard evaluation dataset for developing machine learning models in computational biology and bioinformatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.63` - **F1 Improvement vs Base Model**: `20.5%` ### ๐Ÿ† Top F1 Improvements on JNLPBA Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-DNA-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Large-459M) | 0.7006 | 0.8220 | 0.1214 | 17.3% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-DNA-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Medium-209M) | 0.6928 | 0.8208 | 0.1280 | 18.5% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-DNA-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-XLarge-770M) | 0.5271 | 0.8106 | 0.2835 | 53.8% | | 4 | [OpenMed-ZeroShot-NER-DNA-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Base-220M) | 0.4896 | 0.7907 | 0.3011 | 61.5% | | 5 | [OpenMed-ZeroShot-NER-DNA-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Multi-209M) | 0.6660 | 0.7750 | 0.1090 | 16.4% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. 
Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## 🚀 Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-DNA-Tiny-60M model_name = "OpenMed/OpenMed-ZeroShot-NER-DNA-Tiny-60M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "The p53 protein plays a crucial role in tumor suppression." labels = ['DNA', 'RNA', 'cell_line', 'cell_type', 'protein'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types 💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "DNA", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
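### Rendering Predictions Inline

For quick inspection it is handy to mark predicted spans directly in the text. This sketch assumes each entity dict carries character offsets under `start` and `end`, as GLiNER predictions typically do; the sample entity is hand-written for illustration.

```python
def annotate(text, entities):
    """Insert [label: span] markers around predicted spans, using character offsets."""
    out = []
    last = 0
    for e in sorted(entities, key=lambda e: e["start"]):
        out.append(text[last:e["start"]])
        out.append(f'[{e["label"]}: {text[e["start"]:e["end"]]}]')
        last = e["end"]
    out.append(text[last:])
    return "".join(out)

text = "The p53 protein plays a crucial role in tumor suppression."
ents = [{"label": "protein", "start": 4, "end": 7}]  # illustrative span for "p53"
print(annotate(text, ents))
# The [protein: p53] protein plays a crucial role in tumor suppression.
```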
## ๐Ÿ“š Dataset Information - **Dataset**: JNLPBA - **Description**: Biomedical Entity Recognition - Proteins, DNA, RNA, cell lines, and cell types ### Training Details - **Base Model**: gliner-x-small - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. 
Thank you!
OpenMed/OpenMed-ZeroShot-NER-Disease-Tiny-60M
OpenMed
2025-09-15T21:20:36Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "disease-entity-recognition", "medical-diagnosis", "pathology", "biocuration", "disease", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:20:20Z
--- widget: - text: "The patient was diagnosed with diabetes mellitus type 2." - text: "Symptoms of Alzheimer's disease became apparent over several months." - text: "Treatment for hypertension was initiated immediately." - text: "A possible link between Crohn's disease and gut microbiota is being investigated." - text: "The patient has a family history of cystic fibrosis." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - disease-entity-recognition - medical-diagnosis - pathology - biocuration - disease language: - en license: apache-2.0 --- # 🧬 [OpenMed-ZeroShot-NER-Disease-Tiny-60M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Tiny-60M) **Specialized model for Disease Entity Recognition - Disease entities from the BC5CDR dataset** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed) ## 📋 Model Overview Specialized for **disease and condition recognition** from biomedical texts, covering clinical disorders and pathological states. Supports **patient phenotyping**, **disease indexing**, **literature triage**, and **clinical evidence aggregation**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BC5CDR_DISEASE dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `DISEASE` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BC5CDR-Disease targets disease entity recognition from the BioCreative V Chemical-Disease Relation extraction corpus. 
The BC5CDR-Disease corpus is the disease-focused component of the BioCreative V Chemical-Disease Relation (CDR) task, containing 1,500 PubMed abstracts with 5,818 annotated disease entities. This manually curated dataset is designed to advance automated disease name recognition for medical diagnosis, pathology research, and clinical decision support systems. The corpus includes annotations for various disease types, medical conditions, and pathological states mentioned in biomedical literature. It serves as a benchmark for evaluating NER models in clinical and biomedical applications where accurate disease entity identification is crucial for medical informatics and healthcare analytics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.67` - **F1 Improvement vs Base Model**: `32.7%` ### ๐Ÿ† Top F1 Improvements on BC5CDR_DISEASE Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-Disease-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Large-459M) | 0.5890 | 0.9029 | 0.3138 | 53.3% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-Disease-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Medium-209M) | 0.5721 | 0.8848 | 0.3127 | 54.7% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-Disease-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-XLarge-770M) | 0.6969 | 0.8593 | 0.1624 | 23.3% | | 4 | [OpenMed-ZeroShot-NER-Disease-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Base-220M) | 0.5952 | 0.8293 | 0.2341 | 39.3% | | 5 | [OpenMed-ZeroShot-NER-Disease-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Multi-209M) | 0.5323 | 0.7969 | 0.2645 | 49.7% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. 
Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model card: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Disease-Tiny-60M
model = GLiNER.from_pretrained("OpenMed/OpenMed-ZeroShot-NER-Disease-Tiny-60M")

# Example usage with default entity types
text = "The patient was diagnosed with diabetes mellitus type 2."
labels = ['DISEASE']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "DISEASE", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
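The predicted entities can be post-processed with plain Python. A minimal sketch, assuming `predict_entities` returns dictionaries with `text`, `label`, and `score` keys (the sample predictions below are illustrative, not actual model output):

```python
from collections import defaultdict

# Illustrative predictions in the shape predict_entities is assumed to
# return (dicts with "text", "label", "score" keys); not real model output.
predictions = [
    {"text": "diabetes mellitus type 2", "label": "DISEASE", "score": 0.94},
    {"text": "hypertension", "label": "DISEASE", "score": 0.88},
    {"text": "metformin", "label": "MEDICATION", "score": 0.81},
]

def group_by_label(entities):
    """Bucket extracted mentions under their predicted label."""
    grouped = defaultdict(list)
    for ent in entities:
        grouped[ent["label"]].append(ent["text"])
    return dict(grouped)

print(group_by_label(predictions))
# {'DISEASE': ['diabetes mellitus type 2', 'hypertension'], 'MEDICATION': ['metformin']}
```

Grouping by label is a convenient first step before downstream tasks such as entity linking or frequency analysis.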
## ๐Ÿ“š Dataset Information - **Dataset**: BC5CDR_DISEASE - **Description**: Disease Entity Recognition - Disease entities from the BC5CDR dataset ### Training Details - **Base Model**: gliner-x-small - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. 
Thank you!
OpenMed/OpenMed-ZeroShot-NER-Genome-Tiny-60M
OpenMed
2025-09-15T21:19:39Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "gene-recognition", "protein-recognition", "genomics", "molecular-biology", "gene", "protein", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:19:26Z
---
widget:
- text: "The EGFR gene mutation was identified in lung cancer patients."
- text: "Overexpression of HER2 protein correlates with poor prognosis."
- text: "The TP53 gene encodes a tumor suppressor protein."
- text: "The BRAF V600E mutation is a common driver in melanoma."
- text: "Insulin receptor signaling is essential for glucose homeostasis."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- gene-recognition
- protein-recognition
- genomics
- molecular-biology
- gene
- protein
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Genome-Tiny-60M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Tiny-60M)

**Specialized model for Gene/Protein Entity Recognition - Gene and protein mentions**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]()
[![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]()
[![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Accurate **gene/protein mention recognition**, including synonyms and symbol variants, from biomedical literature. Enables **gene-centric curation**, **variant/association mining**, and **network construction**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BC2GM dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `GENE/PROTEIN` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BC2GM corpus targets gene and protein mention recognition from the BioCreative II Gene Mention task. 
The BC2GM (BioCreative II Gene Mention) corpus is a foundational dataset for gene and protein name recognition in biomedical literature, created for the BioCreative II challenge. This corpus contains thousands of sentences from MEDLINE abstracts with manually annotated gene and protein mentions, serving as a critical benchmark for genomics and molecular biology NER systems. The dataset addresses the challenging task of identifying gene names, which often have complex nomenclature and ambiguous boundaries. It has been instrumental in advancing automated gene recognition systems used in functional genomics research, gene expression analysis, and molecular biology text mining. The corpus continues to be widely used for training and evaluating biomedical NER models. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.61` - **F1 Improvement vs Base Model**: `26.2%` ### ๐Ÿ† Top F1 Improvements on BC2GM Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-Genome-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Large-459M) | 0.5538 | 0.8616 | 0.3078 | 55.6% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-Genome-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Medium-209M) | 0.5893 | 0.8553 | 0.2660 | 45.1% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-Genome-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-XLarge-770M) | 0.5572 | 0.8367 | 0.2795 | 50.2% | | 4 | [OpenMed-ZeroShot-NER-Genome-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Base-220M) | 0.5322 | 0.7986 | 0.2664 | 50.1% | | 5 | [OpenMed-ZeroShot-NER-Genome-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Multi-209M) | 0.5919 | 0.7494 | 0.1576 | 26.6% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. 
Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model card: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Tiny-60M
model = GLiNER.from_pretrained("OpenMed/OpenMed-ZeroShot-NER-Genome-Tiny-60M")

# Example usage with default entity types
text = "The EGFR gene mutation was identified in lung cancer patients."
labels = ['GENE/PROTEIN']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "GENE/PROTEIN", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
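For gene-centric curation, extracted mentions often need light normalization before counting or linking, because gene symbols appear with varying casing across the literature. A minimal sketch, assuming span dictionaries with `text`, `label`, and `score` keys (the sample mentions below are illustrative, not model output):

```python
# Illustrative gene/protein mentions in the assumed predict_entities
# output shape; casing variants are common in the literature.
mentions = [
    {"text": "EGFR", "label": "GENE/PROTEIN", "score": 0.95},
    {"text": "egfr", "label": "GENE/PROTEIN", "score": 0.90},
    {"text": "HER2", "label": "GENE/PROTEIN", "score": 0.93},
]

def unique_mentions(entities):
    """Collapse case-insensitive duplicate surface forms,
    keeping the first-seen casing of each mention."""
    seen, unique = set(), []
    for ent in entities:
        key = ent["text"].lower()
        if key not in seen:
            seen.add(key)
            unique.append(ent["text"])
    return unique

print(unique_mentions(mentions))  # ['EGFR', 'HER2']
```

Full normalization to database identifiers (e.g., Entrez or UniProt) requires a dedicated entity-linking step; this sketch only deduplicates surface forms.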
## ๐Ÿ“š Dataset Information - **Dataset**: BC2GM - **Description**: Gene/Protein Entity Recognition - Gene and protein mentions ### Training Details - **Base Model**: gliner-x-small - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
hskdbjvug/blockassist
hskdbjvug
2025-09-15T21:19:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert dormant yak", "arxiv:2504.07091", "region:us" ]
null
2025-09-15T20:43:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - alert dormant yak --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
OpenMed/OpenMed-ZeroShot-NER-Oncology-Small-166M
OpenMed
2025-09-15T21:19:11Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "cancer-genetics", "oncology", "gene-regulation", "cancer-research", "amino_acid", "anatomical_system", "cancer", "cell", "cellular_component", "developing_anatomical_structure", "gene_or_gene_product", "immaterial_anatomical_entity", "multi-tissue_structure", "organ", "organism", "organism_subdivision", "organism_substance", "pathological_formation", "simple_chemical", "tissue", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:18:57Z
---
widget:
- text: "Mutations in KRAS gene drive oncogenic transformation."
- text: "The tumor suppressor p53 pathway was disrupted."
- text: "EGFR amplification promotes cancer cell proliferation."
- text: "Loss of function of the PTEN gene is common in many cancers."
- text: "The PI3K/AKT/mTOR pathway is a critical regulator of cell growth."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- cancer-genetics
- oncology
- gene-regulation
- cancer-research
- amino_acid
- anatomical_system
- cancer
- cell
- cellular_component
- developing_anatomical_structure
- gene_or_gene_product
- immaterial_anatomical_entity
- multi-tissue_structure
- organ
- organism
- organism_subdivision
- organism_substance
- pathological_formation
- simple_chemical
- tissue
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Oncology-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Small-166M)

**Specialized model for Cancer Genetics - Cancer-related genetic entities**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]()
[![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]()
[![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Oncology-focused model for **cancer genetics**, capturing genes, variants, and cellular processes in tumor biology contexts. Useful for **cancer pathway curation**, **driver gene analysis**, and **precision oncology literature mining**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining.
Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entitiesโ€”such as diseases, chemicals, genes, species, and clinical findingsโ€”directly from unstructured text, without the need for task-specific retraining. Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BIONLP2013_CG dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. 
**You can also add custom entity types without retraining the model**: - `Amino_acid` - `Anatomical_system` - `Cancer` - `Cell` - `Cellular_component` <details> <summary>See 11 more entity types...</summary> - `Developing_anatomical_structure` - `Gene_or_gene_product` - `Immaterial_anatomical_entity` - `Multi-tissue_structure` - `Organ` - `Organism` - `Organism_subdivision` - `Organism_substance` - `Pathological_formation` - `Simple_chemical` - `Tissue` </details> **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BioNLP 2013 CG corpus targets cancer genetics entities for oncology research and cancer genomics. The BioNLP 2013 CG (Cancer Genetics) corpus is a specialized dataset focusing on cancer genetics entities and gene regulation in oncology research. This corpus contains annotations for genes, proteins, and molecular processes specifically related to cancer biology and tumor genetics. Developed for the BioNLP Shared Task 2013, it supports the development of text mining systems for cancer research, oncological studies, and precision medicine applications. The dataset is particularly valuable for identifying cancer-related biomarkers, tumor suppressor genes, oncogenes, and therapeutic targets mentioned in cancer research literature. It serves as a benchmark for evaluating NER systems used in cancer genomics, personalized medicine, and oncology informatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. 
Base Model (on test dataset excluded from training)**: `0.70`
- **F1 Improvement vs Base Model**: `62.3%`

### 🏆 Top F1 Improvements on BIONLP2013_CG Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-Oncology-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M) | 0.5534 | 0.8990 | 0.3456 | 62.5% |
| 🥈 2 | [OpenMed-ZeroShot-NER-Oncology-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Medium-209M) | 0.4885 | 0.8765 | 0.3880 | 79.4% |
| 🥉 3 | [OpenMed-ZeroShot-NER-Oncology-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-XLarge-770M) | 0.5953 | 0.8750 | 0.2797 | 47.0% |
| 4 | [OpenMed-ZeroShot-NER-Oncology-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M) | 0.5324 | 0.8167 | 0.2842 | 53.4% |
| 5 | [OpenMed-ZeroShot-NER-Oncology-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M) | 0.4343 | 0.7498 | 0.3154 | 72.6% |

*Rankings are sorted by finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model card: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Small-166M
model = GLiNER.from_pretrained("OpenMed/OpenMed-ZeroShot-NER-Oncology-Small-166M")

# Example usage with default entity types
text = "Mutations in KRAS gene drive oncogenic transformation."
labels = ['Amino_acid', 'Anatomical_system', 'Cancer', 'Cell', 'Cellular_component', 'Developing_anatomical_structure', 'Gene_or_gene_product', 'Immaterial_anatomical_entity', 'Multi-tissue_structure', 'Organ', 'Organism', 'Organism_subdivision', 'Organism_substance', 'Pathological_formation', 'Simple_chemical', 'Tissue'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "Amino_acid", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
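With 16 candidate labels, overlapping span predictions become more likely; `flat_ner=True` asks the model to return non-overlapping spans. A greedy resolution in the same spirit can be sketched in plain Python, assuming span dictionaries with `start`, `end`, `text`, and `score` keys (the spans and offsets below are illustrative, not model output):

```python
def resolve_overlaps(entities):
    """Greedy flat-NER-style resolution: keep the highest-scoring span,
    drop any span that overlaps an already-kept one."""
    kept = []
    for ent in sorted(entities, key=lambda e: e["score"], reverse=True):
        if all(ent["end"] <= k["start"] or ent["start"] >= k["end"] for k in kept):
            kept.append(ent)
    return sorted(kept, key=lambda e: e["start"])

# Illustrative candidate spans over
# "Mutations in KRAS gene drive oncogenic transformation."
spans = [
    {"start": 13, "end": 17, "text": "KRAS", "label": "Gene_or_gene_product", "score": 0.92},
    {"start": 13, "end": 22, "text": "KRAS gene", "label": "Gene_or_gene_product", "score": 0.55},
    {"start": 29, "end": 53, "text": "oncogenic transformation", "label": "Pathological_formation", "score": 0.61},
]
print([e["text"] for e in resolve_overlaps(spans)])
# ['KRAS', 'oncogenic transformation']
```

Here the lower-scoring "KRAS gene" span is dropped because it overlaps the higher-scoring "KRAS" span; the model's own flat-NER decoding may differ in detail.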
## ๐Ÿ“š Dataset Information - **Dataset**: BIONLP2013_CG - **Description**: Cancer Genetics - Cancer-related genetic entities ### Training Details - **Base Model**: gliner_small-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M
OpenMed
2025-09-15T21:18:43Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "leukemia", "hematology", "cancer", "clinical-medicine", "disease", "gene", "protein", "treatment", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:18:26Z
---
widget:
- text: "The patient presented with chronic lymphocytic leukemia symptoms."
- text: "B-cell proliferation was observed in bone marrow samples."
- text: "Treatment with ibrutinib showed promising results."
- text: "Flow cytometry confirmed the diagnosis of chronic lymphocytic leukemia."
- text: "The patient had del(17p), a high-risk feature in CLL."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- leukemia
- hematology
- cancer
- clinical-medicine
- disease
- gene
- protein
- treatment
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-BloodCancer-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M)

**Specialized model for Clinical Entity Recognition - Clinical entities related to Chronic Lymphocytic Leukemia**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]()
[![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]()
[![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Domain-tuned for **Chronic Lymphocytic Leukemia (CLL)** terminology, capturing disease descriptors, biomarkers, and therapies. Supports **hematology research**, **treatment response analysis**, and **clinical evidence tracking**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities—such as diseases, chemicals, genes, species, and clinical findings—directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated CLL dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `CL` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset CLL corpus is specialized for chronic lymphocytic leukemia entity recognition in hematology and cancer research. 
The CLL (Chronic Lymphocytic Leukemia) corpus is a domain-specific biomedical NER dataset focused on entities related to chronic lymphocytic leukemia, a type of blood cancer. This specialized corpus contains annotations for CLL-specific terminology, biomarkers, treatment entities, and clinical concepts relevant to hematology and oncology research. The dataset is designed to support the development of clinical NLP systems for leukemia research, hematological disorder analysis, and cancer informatics applications. It is particularly valuable for identifying disease-specific entities, therapeutic interventions, and prognostic factors mentioned in CLL research literature. The corpus serves as a benchmark for evaluating NER models in specialized medical domains and clinical research. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.91` - **F1 Improvement vs Base Model**: `80.2%` ### ๐Ÿ† Top F1 Improvements on CLL Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-BloodCancer-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M) | 0.5068 | 0.9130 | 0.4062 | 80.2% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-BloodCancer-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-XLarge-770M) | 0.7291 | 0.8750 | 0.1459 | 20.0% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-BloodCancer-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M) | 0.6009 | 0.7755 | 0.1746 | 29.0% | | 4 | [OpenMed-ZeroShot-NER-BloodCancer-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Small-166M) | 0.5505 | 0.6818 | 0.1314 | 23.9% | | 5 | [OpenMed-ZeroShot-NER-BloodCancer-Tiny-60M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Tiny-60M) | 0.5361 | 0.6780 | 0.1419 | 26.5% | *Rankings are sorted by 
finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)
*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model card: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M
model = GLiNER.from_pretrained("OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M")

# Example usage with default entity types
text = "The patient presented with chronic lymphocytic leukemia symptoms."
labels = ['CL']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "CL", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
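The effect of the threshold can also be explored after prediction by filtering a single low-threshold run at several cutoffs, which avoids re-running the model. A minimal sketch, assuming span dictionaries with `text`, `label`, and `score` keys (the candidates below are illustrative, not model output):

```python
def filter_by_threshold(entities, threshold):
    """Keep only predictions at or above the confidence threshold."""
    return [e for e in entities if e["score"] >= threshold]

# Illustrative low-threshold candidates; not real model output.
candidates = [
    {"text": "chronic lymphocytic leukemia", "label": "CL", "score": 0.91},
    {"text": "ibrutinib", "label": "MEDICATION", "score": 0.34},
    {"text": "bone marrow", "label": "CL", "score": 0.12},
]

for t in (0.5, 0.3, 0.1):
    print(t, [e["text"] for e in filter_by_threshold(candidates, t)])
# 0.5 ['chronic lymphocytic leukemia']
# 0.3 ['chronic lymphocytic leukemia', 'ibrutinib']
# 0.1 ['chronic lymphocytic leukemia', 'ibrutinib', 'bone marrow']
```

Sweeping cutoffs this way makes the precision/recall trade-off of the threshold visible on your own data before you fix a value for production.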
## ๐Ÿ“š Dataset Information - **Dataset**: CLL - **Description**: Clinical Entity Recognition - Clinical entities related to Chronic Lymphocytic Leukemia ### Training Details - **Base Model**: gliner_medium-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. 
## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
BinhQuocNguyen/food-recognition-vit
BinhQuocNguyen
2025-09-15T21:18:42Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "food-recognition", "computer-vision", "pytorch", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-09-15T20:46:56Z
--- license: mit tags: - food-recognition - computer-vision - image-classification - vit - pytorch pipeline_tag: image-classification library_name: transformers --- # Food Recognition Model A Vision Transformer (ViT) fine-tuned for food recognition and classification. This model can identify 10 different types of food from images. ## Model Description This model is based on Google's Vision Transformer (ViT-Base) and has been fine-tuned on a custom food dataset. It can classify images into 10 different food categories (see Model Performance below for evaluation metrics). ## Food Classes The model can recognize the following food types: - apple_pie - caesar_salad - chocolate_cake - cup_cakes - donuts - hamburger - ice_cream - pancakes - pizza - waffles ## Model Performance - **Accuracy**: 68.0% - **F1 Score**: 66.5% - **Precision**: 68.2% - **Recall**: 68.0% ## Usage ### Using the Pipeline ```python from transformers import pipeline # Load the model classifier = pipeline("image-classification", model="BinhQuocNguyen/food-recognition-vit") # Classify an image result = classifier("path/to/your/food_image.jpg") print(result) ``` ### Using the Model Directly ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import torch # Load model and processor processor = AutoImageProcessor.from_pretrained("BinhQuocNguyen/food-recognition-vit") model = AutoModelForImageClassification.from_pretrained("BinhQuocNguyen/food-recognition-vit") # Load and process image image = Image.open("path/to/your/food_image.jpg") inputs = processor(image, return_tensors="pt") # Get predictions with torch.no_grad(): outputs = model(**inputs) predictions = torch.nn.functional.softmax(outputs.logits, dim=-1) # Get top prediction (id2label keys are ints after loading) predicted_class_id = predictions.argmax().item() predicted_class = model.config.id2label[predicted_class_id] confidence = predictions[0][predicted_class_id].item() print(f"Predicted: {predicted_class} ({confidence:.3f})") ``` ## Training Details - 
**Base Model**: google/vit-base-patch16-224 - **Training Framework**: PyTorch with Transformers - **Dataset**: Custom food recognition dataset - **Classes**: 10 food categories - **Image Size**: 224x224 pixels - **Training Time**: ~84 minutes ## Limitations - The model is trained on a specific set of food categories and may not generalize well to other food types - Performance may vary depending on image quality, lighting, and angle - The model works best with clear, well-lit images of food ## Citation If you use this model in your research, please cite: ```bibtex @misc{food-recognition-model, title={Food Recognition Model}, author={BinhQuocNguyen}, year={2025}, publisher={Hugging Face}, howpublished={\url{https://huggingface.co/BinhQuocNguyen/food-recognition-vit}} } ``` ## License This model is released under the MIT License.
OpenMed/OpenMed-ZeroShot-NER-Organism-Large-459M
OpenMed
2025-09-15T21:18:12Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "species-recognition", "taxonomy", "organism-identification", "biodiversity", "species", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:17:41Z
--- widget: - text: "Caenorhabditis elegans is a model organism for genetic studies." - text: "The research focused on Drosophila melanogaster development." - text: "Arabidopsis thaliana serves as a model for plant biology." - text: "The zebrafish, Danio rerio, is widely used for studying vertebrate development." - text: "Neurospora crassa is a type of red bread mold used in genetic research." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - species-recognition - taxonomy - organism-identification - biodiversity - species language: - en license: apache-2.0 --- # ๐Ÿงฌ [OpenMed-ZeroShot-NER-Organism-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Large-459M) **Specialized model for Species Entity Recognition - Species names from the Species-800 dataset** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/๐Ÿค—-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/๐Ÿฅ-OpenMed-green)](https://huggingface.co/OpenMed) ## ๐Ÿ“‹ Model Overview Optimized for **species identification** in scientific text, covering a wide range of taxa and naming variants. Useful for **ecology studies**, **organism tagging**, and **biocuration**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entitiesโ€”such as diseases, chemicals, genes, species, and clinical findingsโ€”directly from unstructured text, without the need for task-specific retraining. 
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated SPECIES800 dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**: - `SPECIES` **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset Species800 is a corpus for species recognition and taxonomy classification in biomedical texts. 
The Species800 corpus is a manually annotated dataset designed for species recognition and taxonomic classification in biomedical literature. This corpus contains 800 abstracts with comprehensive annotations for organism mentions, supporting biodiversity informatics and biological taxonomy research. The dataset includes both scientific names and common names of species, making it valuable for developing NER systems that can handle the complexity of biological nomenclature. It serves as a benchmark for evaluating species identification models used in ecological studies, conservation biology, and systematic biology research. The corpus is particularly useful for text mining applications in biodiversity databases and biological literature analysis. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.85` - **F1 Improvement vs Base Model**: `33.8%` ### ๐Ÿ† Top F1 Improvements on SPECIES800 Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-Organism-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Large-459M) | 0.6329 | 0.8471 | 0.2142 | 33.8% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-Organism-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Medium-209M) | 0.6140 | 0.8257 | 0.2117 | 34.5% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-Organism-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-XLarge-770M) | 0.6111 | 0.8256 | 0.2145 | 35.1% | | 4 | [OpenMed-ZeroShot-NER-Organism-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Base-220M) | 0.5853 | 0.7717 | 0.1864 | 31.8% | | 5 | [OpenMed-ZeroShot-NER-Organism-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Small-166M) | 0.5931 | 0.7092 | 0.1161 | 19.6% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. 
Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## ๐Ÿš€ Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Organism-Large-459M model_name = "OpenMed/OpenMed-ZeroShot-NER-Organism-Large-459M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "Caenorhabditis elegans is a model organism for genetic studies." labels = ['SPECIES'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "SPECIES", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
## ๐Ÿ“š Dataset Information - **Dataset**: SPECIES800 - **Description**: Species Entity Recognition - Species names from the Species-800 dataset ### Training Details - **Base Model**: gliner_large-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. 
Thank you!
NicoShareiThesis/Llama3.2-1300examples
NicoShareiThesis
2025-09-15T21:17:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T21:11:17Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** NicoShareiThesis - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
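## Inference sketch (not from the original card) A minimal, hedged usage example: it assumes the standard `transformers` text-generation pipeline, uses this repo's model ID, and the prompt is purely illustrative.

```python
# Hedged sketch: loads this repo's finetuned Llama 3.2 model via the standard
# transformers text-generation pipeline. The prompt below is illustrative.
messages = [
    {"role": "user", "content": "Explain LoRA finetuning in two sentences."},
]


def run_inference(max_new_tokens: int = 128) -> str:
    # Import lazily so the sketch can be read without downloading any weights.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="NicoShareiThesis/Llama3.2-1300examples",
        device_map="auto",
    )
    out = generator(messages, max_new_tokens=max_new_tokens, return_full_text=False)
    return out[0]["generated_text"]


# Calling run_inference() downloads the model on first use:
# print(run_inference())
```

The chat-style `messages` list follows the Llama 3.2 instruct convention; `device_map="auto"` additionally requires the `accelerate` package.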
luckeciano/Qwen-2.5-7B-GRPO-Base-KL-0.01-v2_6050
luckeciano
2025-09-15T21:17:49Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-15T17:22:50Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Base-KL-0.01-v2_6050 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Base-KL-0.01-v2_6050 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-KL-0.01-v2_6050", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/4hwkdmjv) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M
OpenMed
2025-09-15T21:17:27Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "cancer-genetics", "oncology", "gene-regulation", "cancer-research", "amino_acid", "anatomical_system", "cancer", "cell", "cellular_component", "developing_anatomical_structure", "gene_or_gene_product", "immaterial_anatomical_entity", "multi-tissue_structure", "organ", "organism", "organism_subdivision", "organism_substance", "pathological_formation", "simple_chemical", "tissue", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:16:57Z
--- widget: - text: "Mutations in KRAS gene drive oncogenic transformation." - text: "The tumor suppressor p53 pathway was disrupted." - text: "EGFR amplification promotes cancer cell proliferation." - text: "Loss of function of the PTEN gene is common in many cancers." - text: "The PI3K/AKT/mTOR pathway is a critical regulator of cell growth." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - cancer-genetics - oncology - gene-regulation - cancer-research - amino_acid - anatomical_system - cancer - cell - cellular_component - developing_anatomical_structure - gene_or_gene_product - immaterial_anatomical_entity - multi-tissue_structure - organ - organism - organism_subdivision - organism_substance - pathological_formation - simple_chemical - tissue language: - en license: apache-2.0 --- # ๐Ÿงฌ [OpenMed-ZeroShot-NER-Oncology-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M) **Specialized model for Cancer Genetics - Cancer-related genetic entities** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/๐Ÿค—-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/๐Ÿฅ-OpenMed-green)](https://huggingface.co/OpenMed) ## ๐Ÿ“‹ Model Overview Oncology-focused model for **cancer genetics**, capturing genes, variants, and cellular processes in tumor biology contexts. Useful for **cancer pathway curation**, **driver gene analysis**, and **precision oncology literature mining**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. 
Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entitiesโ€”such as diseases, chemicals, genes, species, and clinical findingsโ€”directly from unstructured text, without the need for task-specific retraining. Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology. ### ๐ŸŽฏ Key Features - **Zero-Shot Capability**: Can recognize any entity type without specific training - **High Precision**: Optimized for biomedical entity recognition - **Domain-Specific**: Fine-tuned on curated BIONLP2013_CG dataset - **Production-Ready**: Validated on clinical benchmarks - **Easy Integration**: Compatible with Hugging Face Transformers ecosystem - **Flexible Entity Recognition**: Add custom entity types without retraining ### ๐Ÿท๏ธ Supported Entity Types This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. 
**You can also add custom entity types without retraining the model**: - `Amino_acid` - `Anatomical_system` - `Cancer` - `Cell` - `Cellular_component` <details> <summary>See 11 more entity types...</summary> - `Developing_anatomical_structure` - `Gene_or_gene_product` - `Immaterial_anatomical_entity` - `Multi-tissue_structure` - `Organ` - `Organism` - `Organism_subdivision` - `Organism_substance` - `Pathological_formation` - `Simple_chemical` - `Tissue` </details> **๐Ÿ’ก Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them. ## ๐Ÿ“Š Dataset BioNLP 2013 CG corpus targets cancer genetics entities for oncology research and cancer genomics. The BioNLP 2013 CG (Cancer Genetics) corpus is a specialized dataset focusing on cancer genetics entities and gene regulation in oncology research. This corpus contains annotations for genes, proteins, and molecular processes specifically related to cancer biology and tumor genetics. Developed for the BioNLP Shared Task 2013, it supports the development of text mining systems for cancer research, oncological studies, and precision medicine applications. The dataset is particularly valuable for identifying cancer-related biomarkers, tumor suppressor genes, oncogenes, and therapeutic targets mentioned in cancer research literature. It serves as a benchmark for evaluating NER systems used in cancer genomics, personalized medicine, and oncology informatics. ## ๐Ÿ“Š Performance Metrics ### Current Model Performance - **Finetuned F1 vs. 
Base Model (on test dataset excluded from training)**: `0.90` - **F1 Improvement vs Base Model**: `62.5%` ### ๐Ÿ† Top F1 Improvements on BIONLP2013_CG Dataset | Rank | Model | Base F1 | Finetuned F1 | ฮ”F1 | ฮ”F1 % | |------|-------|--------:|------------:|----:|------:| | ๐Ÿฅ‡ 1 | [OpenMed-ZeroShot-NER-Oncology-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M) | 0.5534 | 0.8990 | 0.3456 | 62.5% | | ๐Ÿฅˆ 2 | [OpenMed-ZeroShot-NER-Oncology-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Medium-209M) | 0.4885 | 0.8765 | 0.3880 | 79.4% | | ๐Ÿฅ‰ 3 | [OpenMed-ZeroShot-NER-Oncology-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-XLarge-770M) | 0.5953 | 0.8750 | 0.2797 | 47.0% | | 4 | [OpenMed-ZeroShot-NER-Oncology-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Base-220M) | 0.5324 | 0.8167 | 0.2842 | 53.4% | | 5 | [OpenMed-ZeroShot-NER-Oncology-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Multi-209M) | 0.4343 | 0.7498 | 0.3154 | 72.6% | *Rankings are sorted by finetuned F1 and show ฮ”F1% over base model. Test dataset is excluded from training.* ![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png) *Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.* ## ๐Ÿš€ Quick Start ### Installation ```bash pip install gliner==0.2.21 ``` ### Usage ```python from gliner import GLiNER # Load the model # Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M model_name = "OpenMed/OpenMed-ZeroShot-NER-Oncology-Large-459M" model = GLiNER.from_pretrained(model_name) # Example usage with default entity types text = "Mutations in KRAS gene drive oncogenic transformation." 
labels = ['Amino_acid', 'Anatomical_system', 'Cancer', 'Cell', 'Cellular_component', 'Developing_anatomical_structure', 'Gene_or_gene_product', 'Immaterial_anatomical_entity', 'Multi-tissue_structure', 'Organ', 'Organism', 'Organism_subdivision', 'Organism_substance', 'Pathological_formation', 'Simple_chemical', 'Tissue'] entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5) for entity in entities: print(entity) ``` ### Zero-Shot Usage with Custom Entity Types ๐Ÿ’ก **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case: ```python # You can specify custom entity types for zero-shot recognition - for instance: custom_entities = ["MISC", "Amino_acid", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"] entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1) for entity in entities: print(entity) ``` > Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application. 
## ๐Ÿ“š Dataset Information - **Dataset**: BIONLP2013_CG - **Description**: Cancer Genetics - Cancer-related genetic entities ### Training Details - **Base Model**: gliner_large-v2.1 - **Training Framework**: Hugging Face Transformers - **Optimization**: AdamW optimizer with learning rate scheduling - **Validation**: Cross-validation on held-out test set ## ๐Ÿ’ก Use Cases This model is particularly useful for: - **Clinical Text Mining**: Extracting entities from medical records - **Biomedical Research**: Processing scientific literature - **Drug Discovery**: Identifying chemical compounds and drugs - **Healthcare Analytics**: Analyzing patient data and outcomes - **Academic Research**: Supporting biomedical NLP research - **Custom Entity Recognition**: Zero-shot detection of domain-specific entities ## ๐Ÿ”ฌ Model Architecture - **Task**: Zero-Shot Classification (Named Entity Recognition) - **Labels**: Dataset-specific entity types - **Input**: Biomedical text - **Output**: Named entity predictions ## ๐Ÿ“œ License Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details. ## ๐Ÿค Contributing I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face ๐Ÿค— and click "Watch" to stay updated on my latest releases and developments. ## Citation If you use this model in your research or applications, please cite the following paper: ```latex @misc{panahi2025openmedneropensourcedomainadapted, title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets}, author={Maziyar Panahi}, year={2025}, eprint={2508.01630}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.01630}, } ``` Proper citation helps support and acknowledge my work. Thank you!
OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M
OpenMed
2025-09-15T21:16:43Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "leukemia", "hematology", "cancer", "clinical-medicine", "disease", "gene", "protein", "treatment", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:16:15Z
--- widget: - text: "The patient presented with chronic lymphocytic leukemia symptoms." - text: "B-cell proliferation was observed in bone marrow samples." - text: "Treatment with ibrutinib showed promising results." - text: "Flow cytometry confirmed the diagnosis of chronic lymphocytic leukemia." - text: "The patient had del(17p), a high-risk feature in CLL." tags: - token-classification - entity recognition - named-entity-recognition - zero-shot - zero-shot-ner - zero shot - biomedical-nlp - gliner - leukemia - hematology - cancer - clinical-medicine - disease - gene - protein - treatment language: - en license: apache-2.0 --- # ๐Ÿงฌ [OpenMed-ZeroShot-NER-BloodCancer-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M) **Specialized model for Clinical Entity Recognition - Clinical entities related to Chronic Lymphocytic Leukemia** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]() [![GliNER](https://img.shields.io/badge/๐Ÿค—-GliNER-yellow)]() [![OpenMed](https://img.shields.io/badge/๐Ÿฅ-OpenMed-green)](https://huggingface.co/OpenMed) ## ๐Ÿ“‹ Model Overview Domain-tuned for **Chronic Lymphocytic Leukemia (CLL)** terminology, capturing disease descriptors, biomarkers, and therapies. Supports **hematology research**, **treatment response analysis**, and **clinical evidence tracking**. OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entitiesโ€”such as diseases, chemicals, genes, species, and clinical findingsโ€”directly from unstructured text, without the need for task-specific retraining. 
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology.

### 🎯 Key Features

- **Zero-Shot Capability**: Can recognize any entity type without specific training
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Fine-tuned on curated CLL dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
- **Flexible Entity Recognition**: Add custom entity types without retraining

### 🏷️ Supported Entity Types

This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**:

- `CL`

**💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them.

## 📊 Dataset

CLL corpus is specialized for chronic lymphocytic leukemia entity recognition in hematology and cancer research.
The CLL (Chronic Lymphocytic Leukemia) corpus is a domain-specific biomedical NER dataset focused on entities related to chronic lymphocytic leukemia, a type of blood cancer. This specialized corpus contains annotations for CLL-specific terminology, biomarkers, treatment entities, and clinical concepts relevant to hematology and oncology research. The dataset is designed to support the development of clinical NLP systems for leukemia research, hematological disorder analysis, and cancer informatics applications. It is particularly valuable for identifying disease-specific entities, therapeutic interventions, and prognostic factors mentioned in CLL research literature. The corpus serves as a benchmark for evaluating NER models in specialized medical domains and clinical research.

## 📊 Performance Metrics

### Current Model Performance

- **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.78`
- **F1 Improvement vs Base Model**: `29.0%`

### 🏆 Top F1 Improvements on CLL Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-BloodCancer-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Medium-209M) | 0.5068 | 0.9130 | 0.4062 | 80.2% |
| 🥈 2 | [OpenMed-ZeroShot-NER-BloodCancer-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-XLarge-770M) | 0.7291 | 0.8750 | 0.1459 | 20.0% |
| 🥉 3 | [OpenMed-ZeroShot-NER-BloodCancer-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M) | 0.6009 | 0.7755 | 0.1746 | 29.0% |
| 4 | [OpenMed-ZeroShot-NER-BloodCancer-Small-166M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Small-166M) | 0.5505 | 0.6818 | 0.1314 | 23.9% |
| 5 | [OpenMed-ZeroShot-NER-BloodCancer-Tiny-60M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Tiny-60M) | 0.5361 | 0.6780 | 0.1419 | 26.5% |

*Rankings are sorted by
finetuned F1 and show ΔF1% over base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)

*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M
model_name = "OpenMed/OpenMed-ZeroShot-NER-BloodCancer-Large-459M"
model = GLiNER.from_pretrained(model_name)

# Example usage with default entity types
text = "The patient presented with chronic lymphocytic leukemia symptoms."
labels = ['CL']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "CL", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
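Once you have the list of prediction dictionaries, a common next step is to filter and group them. The sketch below is illustrative only: it assumes the output fields GLiNER typically returns (`text`, `label`, `score`, `start`, `end`), and `group_entities` plus the sample predictions are hypothetical helpers, not part of the library.

```python
from collections import defaultdict

def group_entities(entities, min_score=0.5):
    """Group predicted spans by label, keeping only sufficiently confident ones."""
    grouped = defaultdict(list)
    for ent in entities:
        if ent["score"] >= min_score:
            grouped[ent["label"]].append((ent["text"], round(ent["score"], 2)))
    return dict(grouped)

# Sample predictions in the dict shape GLiNER typically returns
predictions = [
    {"text": "chronic lymphocytic leukemia", "label": "CL", "score": 0.91, "start": 27, "end": 55},
    {"text": "ibrutinib", "label": "MEDICATION", "score": 0.74, "start": 15, "end": 24},
    {"text": "bone marrow", "label": "CL", "score": 0.31, "start": 40, "end": 51},
]
print(group_entities(predictions, min_score=0.5))
```

Low-scoring spans (here, `"bone marrow"` at 0.31) are dropped, which pairs naturally with the threshold tuning described in the tip above.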
## 📚 Dataset Information

- **Dataset**: CLL
- **Description**: Clinical Entity Recognition - Clinical entities related to Chronic Lymphocytic Leukemia

### Training Details

- **Base Model**: gliner_large-v2.1
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Cross-validation on held-out test set

## 💡 Use Cases

This model is particularly useful for:

- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying chemical compounds and drugs
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
- **Custom Entity Recognition**: Zero-shot detection of domain-specific entities

## 🔬 Model Architecture

- **Task**: Zero-Shot Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Biomedical text
- **Output**: Named entity predictions

## 📜 License

Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.

## 🤝 Contributing

I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on my latest releases and developments.
## Citation

If you use this model in your research or applications, please cite the following paper:

```latex
@misc{panahi2025openmedneropensourcedomainadapted,
      title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
      author={Maziyar Panahi},
      year={2025},
      eprint={2508.01630},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.01630},
}
```

Proper citation helps support and acknowledge my work. Thank you!
sysresearch101/t5-large-finetuned-xsum
sysresearch101
2025-09-15T21:15:23Z
6
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "t5-large-summarization", "pipeline:summarization", "en", "dataset:EdinburghNLP/xsum", "arxiv:2211.08412", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:mit", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-07-26T14:55:54Z
--- language: - en license: mit tags: - summarization - t5-large-summarization - pipeline:summarization model-index: - name: sysresearch101/t5-large-finetuned-xsum results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - type: rouge value: 26.8921 name: ROUGE-1 verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmFkMTFiNmM3YmRkZDk1Y2FhM2EwOTdiYmUwYjBhMGEzZmIyZmIwNWI5OTVmY2U0N2QzYzgxYzM0OTEzMjFjNSIsInZlcnNpb24iOjF9.fOq4zI_BWvTLFJFQOWNk3xEsDIu3aAeboGYPw5TiBqdJJjvdyKmLbfj2WVnNboWbrmp1PuL01iJjTi2Xj6PUAA - type: rouge value: 6.9411 name: ROUGE-2 verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTBlZmI3NjQ3M2JiYzI4MTg3YmJkMjg0ZmE5MDUwNzljNTYyM2M0NzA3YTNiNTA2Nzk4MDhhYWZjZjgyMmE1MCIsInZlcnNpb24iOjF9.rH0DY2hMz2rXaK29vkt7xah-3G95rY4MOS2oVKjXmw4TijB-ZVytfLJAlBmyqA8HYAythRCywmLSjjCDWc66Cg - type: rouge value: 21.2832 name: ROUGE-L verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODAwZDYzNTc0NjZhNzNiMDE2ZDY2NjNjNmViNTc0NDVjNTZkYjljODhmYmNiMWFhY2NkZjU5MzQ0NmM0OTcyMSIsInZlcnNpb24iOjF9.5duHtdjZ8dwtbp1HKyMR4mVK9IIlEZvuWGjQMErpE7VNyKPhMOT6Avh_vXFQz6q_jBzqpZGGREho1mt50yBsDw - type: rouge value: 21.284 name: ROUGE-LSUM verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ2NmNhZTZmZDFkNTcyYjQ4MjhhYWJhODY1ZGRjODY2ZTE5MmRmZDRlYTk4NWE4YWM1OWY2M2NjOWQ3YzU0OCIsInZlcnNpb24iOjF9.SJ8xTcAVWrRDmJmQoxE1ADIcdGA4tr3V04Lv0ipMJiUksCdNC7FO8jYbjG9XmiqbDnnr5h4XoK4JB4-GsA-gDA - type: loss value: 2.5411810874938965 name: loss verified: true verifyToken: >- eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGViNTVlNGI0Njk4NmZmZjExNDBkNTQ4N2FhMzRkZjRjNDNlYzFhZDIyMjJhMmFiM2ZhMTQzYTM4YzNkNWVlNyIsInZlcnNpb24iOjF9.p9n2Kf48k9F9Bkk9j7UKRayvVmOr7_LV80T0ti4lUWFtTsZ91Re841xnEAcKSYgQ9-Bni56ldq9js3kunspJCw - type: gen_len value: 18.7755 name: gen_len verified: true verifyToken: >- 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQ1ZWUxNmFjNmU0OGI4MDQyZDNjMWQwZGViNDhlMzE1OGE3YmYwYzZjYmM1NWEwMjk2MDFiMjQ4ZThhMjg5YyIsInZlcnNpb24iOjF9.aNp-NFzBSm84GnXuDtYuHaOsSk7zw8kjCphowYFciwt-aDnhwwurYIr59kMT8JNFMnRInsDi8tvYdapareV3DA
datasets:
- EdinburghNLP/xsum
base_model:
- google-t5/t5-large
---

# T5-Large Fine-tuned on XSum

**Task:** Abstractive Summarization (English)
**Base Model:** google-t5/t5-large
**License:** MIT

## Overview

This model is a T5-Large checkpoint fine-tuned exclusively on the [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum) dataset. It specializes in generating concise, single-sentence summaries in the style of BBC article abstracts.

## Performance (XSum test set)

| Metric | Score |
|--------|-------|
| ROUGE-1 | 26.89 |
| ROUGE-2 | 6.94 |
| ROUGE-L | 21.28 |
| Loss | 2.54 |
| Avg. Length | 18.77 tokens |

## Usage

### Quick Start

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sysresearch101/t5-large-finetuned-xsum")

article = "Your article text here..."
summary = summarizer(article, max_length=80, min_length=20, do_sample=False)
print(summary[0]['summary_text'])
```

### Advanced Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sysresearch101/t5-large-finetuned-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("sysresearch101/t5-large-finetuned-xsum")

inputs = tokenizer("summarize: " + article, return_tensors="pt", max_length=512, truncation=True)
outputs = model.generate(
    **inputs,
    max_length=80,
    min_length=20,
    num_beams=4,
    no_repeat_ngram_size=2,
    length_penalty=1.0,
    repetition_penalty=2.5,
    use_cache=True,
    early_stopping=True,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

## Training Data

- [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum): BBC articles paired with professionally written single-sentence summaries

## Intended Use

- **Primary:** Summarization
- **Secondary:** Research on extreme summarization, single-sentence summary generation, educational demonstrations, comparative studies with multi-sentence models
- **Not recommended:** Multi-sentence summarization tasks, production use without validation

## Limitations

- Trained only on news domain; may not generalize to other text types
- Generates very short summaries (average ~19 tokens)
- May oversimplify complex topics due to single-sentence constraint

## Citation

```bibtex
@misc{stept2023_t5_large_xsum,
  author = {Shlomo Stept (sysresearch101)},
  title = {T5-Large Fine-tuned on XSum for Abstractive Summarization},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/sysresearch101/t5-large-finetuned-xsum}
}
```

## Papers Using This Model

* [Tam et al. (2023). *Evaluating the Factual Consistency of Large Language Models Through Summarization (FIB).* Findings of ACL 2023.](https://arxiv.org/pdf/2211.08412)
* [Liu et al. (2024). *LLMs as Narcissistic Evaluators: When Ego Inflates Evaluation Scores.* Findings of ACL 2024.](https://aclanthology.org/2024.findings-acl.753.pdf)
* [Zhu et al. (2024). *MTAS: A Reference-Free Approach for Evaluating Abstractive Summarization Systems.* Proceedings of the ACM on SE (FSE 2024).](https://doi.org/10.1145/3660820)

## Contact

Created by [Shlomo Stept](https://shlomostept.com) ([ORCID: 0009-0009-3185-589X](https://orcid.org/0009-0009-3185-589X))
DARMIS AI

- Website: [shlomostept.com](https://shlomostept.com)
- LinkedIn: [linkedin.com/in/shlomo-stept](https://linkedin.com/in/shlomo-stept)
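Since the advanced-usage snippet truncates inputs at 512 tokens (`max_length=512, truncation=True`), one illustrative way to handle longer articles is to split them into chunks and summarize each chunk separately. The helper below is a hypothetical sketch, not part of this model: it uses a character budget as a crude proxy for the tokenizer's token limit, so for exact budgets you would count tokens with the model's tokenizer instead.

```python
def chunk_article(text, max_chars=2000):
    """Split a long article into sentence-aligned chunks under a character budget.

    Character count is only a rough stand-in for the 512-token input limit.
    """
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding this sentence would exceed the budget
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = (current + " " + sentence).strip()
    if current:
        chunks.append(current)
    return chunks

long_article = "First sentence. " * 300
for chunk in chunk_article(long_article, max_chars=500):
    print(len(chunk))
```

Each chunk could then be fed to the `summarizer` pipeline above, with the per-chunk summaries concatenated or summarized again.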
OpenMed/OpenMed-ZeroShot-NER-Anatomy-Large-459M
OpenMed
2025-09-15T21:15:03Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "anatomical-entity-recognition", "medical-terminology", "anatomy", "healthcare", "body_part", "organ", "en", "arxiv:2508.01630", "license:apache-2.0", "region:us" ]
token-classification
2025-09-15T21:14:35Z
---
widget:
- text: "The patient complained of pain in the left ventricle region."
- text: "Examination revealed inflammation of the hippocampus."
- text: "The liver showed signs of fatty infiltration."
- text: "An MRI of the cerebrum showed no signs of abnormalities."
- text: "The procedure involved an incision near the femoral artery."
tags:
- token-classification
- entity recognition
- named-entity-recognition
- zero-shot
- zero-shot-ner
- zero shot
- biomedical-nlp
- gliner
- anatomical-entity-recognition
- medical-terminology
- anatomy
- healthcare
- body_part
- organ
language:
- en
license: apache-2.0
---

# 🧬 [OpenMed-ZeroShot-NER-Anatomy-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Large-459M)

**Specialized model for Anatomical Entity Recognition - Anatomical structures and body parts**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python](https://img.shields.io/badge/Python-3.11%2B-blue)]()
[![GliNER](https://img.shields.io/badge/🤗-GliNER-yellow)]()
[![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)

## 📋 Model Overview

Tailored to **anatomical structure recognition**, including organs, tissues, and substructures in clinical narratives. Supports **radiology and surgical note parsing**, **site-of-disease extraction**, and **anatomy-aware analytics**.

OpenMed ZeroShot NER is an advanced, domain-adapted Named Entity Recognition (NER) model designed specifically for medical, biomedical, and clinical text mining. Leveraging state-of-the-art zero-shot learning, this model empowers researchers, clinicians, and data scientists to extract expert-level biomedical entities (such as diseases, chemicals, genes, species, and clinical findings) directly from unstructured text, without the need for task-specific retraining.
Built on the robust GLiNER architecture and fine-tuned on curated biomedical corpora, OpenMed ZeroShot NER delivers high-precision entity recognition for critical healthcare and life sciences applications. Its zero-shot capability means you can flexibly define and extract any entity type relevant to your workflow, from standard biomedical categories to custom clinical concepts, supporting rapid adaptation to new research domains and regulatory requirements. Whether you are working on clinical NLP, biomedical research, electronic health record (EHR) de-identification, or large-scale literature mining, OpenMed ZeroShot NER provides a production-ready, open-source solution that combines expert-level accuracy with unmatched flexibility. Join the OpenMed community to accelerate your medical text analytics with cutting-edge, zero-shot NER technology.

### 🎯 Key Features

- **Zero-Shot Capability**: Can recognize any entity type without specific training
- **High Precision**: Optimized for biomedical entity recognition
- **Domain-Specific**: Fine-tuned on curated ANATOMY dataset
- **Production-Ready**: Validated on clinical benchmarks
- **Easy Integration**: Compatible with Hugging Face Transformers ecosystem
- **Flexible Entity Recognition**: Add custom entity types without retraining

### 🏷️ Supported Entity Types

This zero-shot model can identify and classify biomedical entities, including but not limited to these entity types. **You can also add custom entity types without retraining the model**:

- `Anatomy`

**💡 Zero-Shot Flexibility**: As a GliNER-based model, you can specify any entity types you want to detect, even if they weren't part of the original training. Simply provide the entity labels when using the model, and it will adapt to recognize them.

## 📊 Dataset

Anatomy corpus focuses on anatomical entity recognition for medical terminology and healthcare applications.
The Anatomy corpus is a specialized biomedical NER dataset designed for recognizing anatomical entities and medical terminology in clinical and biomedical texts. This corpus contains annotations for anatomical structures, body parts, organs, and physiological systems mentioned in medical literature. It is essential for developing clinical NLP systems, medical education tools, and healthcare informatics applications where accurate anatomical entity identification is crucial. The dataset supports the development of automated systems for medical coding, clinical decision support, and anatomical knowledge extraction from medical records and literature. It serves as a valuable resource for training NER models used in medical imaging, surgical planning, and clinical documentation.

## 📊 Performance Metrics

### Current Model Performance

- **Finetuned F1 vs. Base Model (on test dataset excluded from training)**: `0.93`
- **F1 Improvement vs Base Model**: `211.3%`

### 🏆 Top F1 Improvements on ANATOMY Dataset

| Rank | Model | Base F1 | Finetuned F1 | ΔF1 | ΔF1 % |
|------|-------|--------:|------------:|----:|------:|
| 🥇 1 | [OpenMed-ZeroShot-NER-Anatomy-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Large-459M) | 0.2978 | 0.9271 | 0.6293 | 211.3% |
| 🥈 2 | [OpenMed-ZeroShot-NER-Anatomy-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Medium-209M) | 0.3172 | 0.9114 | 0.5942 | 187.3% |
| 🥉 3 | [OpenMed-ZeroShot-NER-Anatomy-XLarge-770M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-XLarge-770M) | 0.3780 | 0.9021 | 0.5241 | 138.7% |
| 4 | [OpenMed-ZeroShot-NER-Anatomy-Base-220M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Base-220M) | 0.2804 | 0.8627 | 0.5823 | 207.7% |
| 5 | [OpenMed-ZeroShot-NER-Anatomy-Multi-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Multi-209M) | 0.3121 | 0.8091 | 0.4969 | 159.2% |

*Rankings are sorted by finetuned F1 and show ΔF1% over
base model. Test dataset is excluded from training.*

![OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed-zero-shot-clinical-ner-finetuned.png)

*Figure: OpenMed ZeroShot Clinical & Biomedical NER vs. Original GLiNER models.*

## 🚀 Quick Start

### Installation

```bash
pip install gliner==0.2.21
```

### Usage

```python
from gliner import GLiNER

# Load the model
# Model: https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Anatomy-Large-459M
model_name = "OpenMed/OpenMed-ZeroShot-NER-Anatomy-Large-459M"
model = GLiNER.from_pretrained(model_name)

# Example usage with default entity types
text = "The patient complained of pain in the left ventricle region."
labels = ['Anatomy']
entities = model.predict_entities(text, labels, flat_ner=True, threshold=0.5)

for entity in entities:
    print(entity)
```

### Zero-Shot Usage with Custom Entity Types

💡 **Tip:** If you want to extract entities that are not present in the original training set (i.e., use custom or rare entity types), you may get better results by lowering the `threshold` parameter in `model.predict_entities`. For example, try `threshold=0.3` or even lower, depending on your use case:

```python
# You can specify custom entity types for zero-shot recognition - for instance:
custom_entities = ["MISC", "Anatomy", "PERSON", "LOCATION", "MEDICATION", "PROCEDURE"]
entities = model.predict_entities(text, custom_entities, flat_ner=True, threshold=0.1)

for entity in entities:
    print(entity)
```

> Lowering the threshold makes the model more permissive and can help it recognize new or less common entity types, but may also increase false positives. Adjust as needed for your application.
## 📚 Dataset Information

- **Dataset**: ANATOMY
- **Description**: Anatomical Entity Recognition - Anatomical structures and body parts

### Training Details

- **Base Model**: gliner_large-v2.1
- **Training Framework**: Hugging Face Transformers
- **Optimization**: AdamW optimizer with learning rate scheduling
- **Validation**: Cross-validation on held-out test set

## 💡 Use Cases

This model is particularly useful for:

- **Clinical Text Mining**: Extracting entities from medical records
- **Biomedical Research**: Processing scientific literature
- **Drug Discovery**: Identifying chemical compounds and drugs
- **Healthcare Analytics**: Analyzing patient data and outcomes
- **Academic Research**: Supporting biomedical NLP research
- **Custom Entity Recognition**: Zero-shot detection of domain-specific entities

## 🔬 Model Architecture

- **Task**: Zero-Shot Classification (Named Entity Recognition)
- **Labels**: Dataset-specific entity types
- **Input**: Biomedical text
- **Output**: Named entity predictions

## 📜 License

Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.

## 🤝 Contributing

I welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join my mission to advance open-source Healthcare AI, I'd love to hear from you. Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on my latest releases and developments.

## Citation

If you use this model in your research or applications, please cite the following paper:

```latex
@misc{panahi2025openmedneropensourcedomainadapted,
      title={OpenMed NER: Open-Source, Domain-Adapted State-of-the-Art Transformers for Biomedical NER Across 12 Public Datasets},
      author={Maziyar Panahi},
      year={2025},
      eprint={2508.01630},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.01630},
}
```

Proper citation helps support and acknowledge my work. Thank you!
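The `flat_ner=True` argument used throughout the usage examples requests non-overlapping spans. GLiNER performs this decoding internally; the standalone sketch below only illustrates the underlying idea, greedy selection of the most confident span among overlapping candidates. `select_flat_spans` and the sample candidates are hypothetical, not part of the library.

```python
def select_flat_spans(candidates):
    """Greedy non-overlapping span selection, highest score first.

    Illustrates the idea behind flat (non-nested) NER decoding: when
    candidate spans overlap, keep the most confident one and drop the rest.
    """
    chosen = []
    for cand in sorted(candidates, key=lambda c: c["score"], reverse=True):
        overlaps = any(
            cand["start"] < c["end"] and c["start"] < cand["end"] for c in chosen
        )
        if not overlaps:
            chosen.append(cand)
    return sorted(chosen, key=lambda c: c["start"])

candidates = [
    {"text": "left ventricle", "label": "Anatomy", "score": 0.88, "start": 34, "end": 48},
    {"text": "ventricle", "label": "Anatomy", "score": 0.41, "start": 39, "end": 48},
    {"text": "femoral artery", "label": "Anatomy", "score": 0.77, "start": 60, "end": 74},
]
print(select_flat_spans(candidates))
```

Here the nested `"ventricle"` span loses to the higher-scoring `"left ventricle"`, mirroring the flat output you get from `predict_entities` with `flat_ner=True`.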
AlaminI/gemma-3-270m-finetuned
AlaminI
2025-09-15T21:13:53Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:google/gemma-3-270m", "lora", "sft", "transformers", "trl", "text-generation", "arxiv:1910.09700", "base_model:google/gemma-3-270m", "region:us" ]
text-generation
2025-09-15T21:13:27Z
--- base_model: google/gemma-3-270m library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:google/gemma-3-270m - lora - sft - transformers - trl --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1