modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
Jellywibble/tl_dpo_test_merged
Jellywibble
2025-05-21T16:12:25Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T16:04:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Abdoul-AI/Devstral-Small-2505-GGUF
Abdoul-AI
2025-05-21T16:12:18Z
0
0
vllm
[ "vllm", "autoquant", "gguf", "text2text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "license:apache-2.0", "region:us" ]
text2text-generation
2025-05-21T16:12:17Z
--- language: - en - fr - de - es - pt - it - ja - ko - ru - zh - ar - fa - id - ms - ne - pl - ro - sr - sv - tr - uk - vi - hi - bn license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Devstral-Small-2505 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. pipeline_tag: text2text-generation tags: - autoquant - gguf --- # Devstral-Small-2505 Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results). It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), so it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning. For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community. Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral). ## Key Features: - **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents. - **Lightweight**: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use. - **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window**: A 128k context window. - **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark Results ### SWE-Bench Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%. | Model | Scaffold | SWE-Bench Verified (%) | |------------------|--------------------|------------------------| | Devstral | OpenHands Scaffold | **46.8** | | GPT-4.1-mini | OpenAI Scaffold | 23.6 | | Claude 3.5 Haiku | Anthropic Scaffold | 40.6 | | SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 | When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 235B-A22B. ![SWE Benchmark](assets/swe_bench.png) ## Usage We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold. You can use it either through our API or by running it locally. ### API Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key. Then run these commands to start the OpenHands Docker container.
```bash export MISTRAL_API_KEY=<MY_KEY> docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json docker run -it --rm --pull=always \ -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \ -e LOG_ALL_EVENTS=true \ -v /var/run/docker.sock:/var/run/docker.sock \ -v ~/.openhands-state:/.openhands-state \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name openhands-app \ docker.all-hands.dev/all-hands-ai/openhands:0.39 ``` ### Local inference The model can also be deployed with the following libraries: - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended) - [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference) - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers) - [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio) - [`ollama`](https://github.com/ollama/ollama): See [here](#ollama) ### OpenHands (recommended) #### Launch a server to deploy Devstral-Small-2505 Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`. For this tutorial, we spun up a vLLM server with the command: ```bash vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2 ``` The server address should be in the following format: `http://<your-server-url>:8000/v1` #### Launch OpenHands You can follow the OpenHands installation instructions [here](https://docs.all-hands.dev/modules/usage/installation). The easiest way to launch OpenHands is to use the Docker image: ```bash docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik docker run -it --rm --pull=always \ -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \ -e LOG_ALL_EVENTS=true \ -v /var/run/docker.sock:/var/run/docker.sock \ -v ~/.openhands-state:/.openhands-state \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name openhands-app \ docker.all-hands.dev/all-hands-ai/openhands:0.38 ``` Then, you can access the OpenHands UI at `http://localhost:3000`. #### Connect to the server When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier. Fill in the following fields: - **Custom Model**: `openai/mistralai/Devstral-Small-2505` - **Base URL**: `http://<your-server-url>:8000/v1` - **API Key**: `token` (or any other token you used to launch the server, if any) #### Use OpenHands powered by Devstral Now you're ready to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app. <details> <summary>To-Do list app</summary> 1. Let's ask Devstral to generate the app with the following prompt: ```txt Build a To-Do list app with the following requirements: - Built using FastAPI and React. - Make it a one page app that: - Allows to add a task. - Allows to delete a task. - Allows to mark a task as done.
- Displays the list of tasks. - Store the tasks in a SQLite database. ``` ![Agent prompting](assets/tuto_open_hands/agent_prompting.png) 2. Let's see the result. You should see the agent construct the app and be able to explore the code it generated. If it doesn't do so automatically, ask Devstral to deploy the app or do it manually, and then go to the frontend deployment URL to see the app. ![Agent working](assets/tuto_open_hands/agent_working.png) ![App UI](assets/tuto_open_hands/app_ui.png) 3. Iterate. Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it as done, but a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter the tasks by status. Enjoy building with Devstral Small and OpenHands! </details> ### vLLM (recommended) We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **_Installation_** Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5): ``` pip install vllm --upgrade ``` Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5). To check: ``` python -c "import mistral_common; print(mistral_common.__version__)" ``` You can also make use of a ready-to-go [Docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or pull one from [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39). #### Server We recommend that you use Devstral in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2 ``` 2. To query the server, you can use a simple Python snippet: ```py import requests import json from huggingface_hub import hf_hub_download url = "http://<your-server-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Devstral-Small-2505" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "<your-command>", }, ], }, ] data = {"model": model, "messages": messages, "temperature": 0.15} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) ``` ### Mistral-inference We recommend using mistral-inference to quickly try out / "vibe-check" Devstral. #### Install Make sure to have mistral_inference >= 1.6.0 installed.
```bash pip install mistral_inference --upgrade ``` #### Download ```python from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path) ``` #### Python You can run the model using the following command: ```bash mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300 ``` You can then prompt it with anything you'd like. ### Transformers To make the best use of our model with transformers, make sure you have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer. ```bash pip install mistral-common --upgrade ``` Then load our tokenizer along with the model and generate: ```python import torch from mistral_common.protocol.instruct.messages import ( SystemMessage, UserMessage ) from mistral_common.protocol.instruct.request import ChatCompletionRequest from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy from huggingface_hub import hf_hub_download from transformers import AutoModelForCausalLM def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() return system_prompt model_id = "mistralai/Devstral-Small-2505" tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json") SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt") tokenizer = MistralTokenizer.from_file(tekken_file) model = AutoModelForCausalLM.from_pretrained(model_id) tokenized = tokenizer.encode_chat_completion( ChatCompletionRequest( messages=[ SystemMessage(content=SYSTEM_PROMPT), UserMessage(content="<your-command>"), ], ) ) output = model.generate( input_ids=torch.tensor([tokenized.tokens]), max_new_tokens=1000, )[0] decoded_output = tokenizer.decode(output[len(tokenized.tokens):]) print(decoded_output) ``` ### LMStudio Download the weights from Hugging Face: ``` pip install -U "huggingface_hub[cli]" huggingface-cli download \ "mistralai/Devstral-Small-2505_gguf" \ --include "devstralQ4_K_M.gguf" \ --local-dir "mistralai/Devstral-Small-2505_gguf/" ``` You can serve the model locally with [LMStudio](https://lmstudio.ai/). * Download [LM Studio](https://lmstudio.ai/) and install it * Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap` * In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`) * Open the LM Studio application, click the terminal icon to get into the developer tab, click "Select a model to load" and select Devstral Q4 K M. Toggle the status button to start the model, and in the settings toggle "Serve on Local Network" on. * On the right tab, you will see an API identifier, which should be devstralq4_k_m, and an API address under API Usage. Keep note of this address; we will use it in the next step. #### Launch OpenHands You can now interact with the model served from LM Studio with OpenHands.
Start the OpenHands server with Docker: ```bash docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik docker run -it --rm --pull=always \ -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \ -e LOG_ALL_EVENTS=true \ -v /var/run/docker.sock:/var/run/docker.sock \ -v ~/.openhands-state:/.openhands-state \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name openhands-app \ docker.all-hands.dev/all-hands-ai/openhands:0.38 ``` Click "see advanced setting" on the second line. In the new tab, toggle advanced to on. Set the custom model to mistral/devstralq4_k_m and the Base URL to the API address we got from LM Studio in the last step. Set the API Key to dummy and click save changes. ### Ollama You can run Devstral using the [Ollama](https://ollama.ai/) CLI. ```bash ollama run devstral ```
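As a complement to the raw `requests` example earlier in this card: the vLLM server launched above exposes an OpenAI-compatible endpoint, so it can also be queried with the `openai` Python client. This is a minimal sketch, not part of the original card; the base URL, API key, and prompt are placeholders to adapt to your own deployment.

```python
from openai import OpenAI

# Point the client at the OpenAI-compatible vLLM server started above.
# "token" matches the dummy key used when launching the server.
client = OpenAI(base_url="http://<your-server-url>:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="mistralai/Devstral-Small-2505",
    messages=[{"role": "user", "content": "<your-command>"}],
    temperature=0.15,
)
print(response.choices[0].message.content)
```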
CompassioninMachineLearning/2alpacallama_plus1k_80_20mix
CompassioninMachineLearning
2025-05-21T16:12:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:CompassioninMachineLearning/10kalpaca_plus_llama31_8bInstruct", "base_model:finetune:CompassioninMachineLearning/10kalpaca_plus_llama31_8bInstruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T16:07:34Z
--- base_model: CompassioninMachineLearning/10kalpaca_plus_llama31_8bInstruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** CompassioninMachineLearning - **License:** apache-2.0 - **Finetuned from model :** CompassioninMachineLearning/10kalpaca_plus_llama31_8bInstruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
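The card stops at the training note, so below is a small, hedged sketch (not part of the original card) of loading this checkpoint for inference with the transformers text-generation pipeline; the prompt and generation settings are placeholders.

```python
from transformers import pipeline

# Load the fine-tuned Llama checkpoint from the Hub.
generator = pipeline(
    "text-generation",
    model="CompassioninMachineLearning/2alpacallama_plus1k_80_20mix",
    device_map="auto",
)

# Example prompt -- adjust to the chat/instruction format used during fine-tuning.
output = generator("What does compassion in machine learning mean?", max_new_tokens=64)
print(output[0]["generated_text"])
```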
unity/inference-engine-midas
unity
2025-05-21T16:11:08Z
23
10
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "depth-estimation", "license:mit", "region:us" ]
depth-estimation
2024-01-09T22:48:29Z
--- license: mit library_name: unity-sentis pipeline_tag: depth-estimation tags: - unity-inference-engine --- ## MiDaS in Unity 6 with Inference Engine [MiDaS](https://pytorch.org/hub/intelisl_midas_v2/) is a depth estimation model that can be run in Unity using Inference Engine. ## How to Use Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/inference-engine-samples/tree/main/DepthEstimationSample) ![preview](preview.gif) ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
phospho-app/tictactoe-A1-orange-8010-cg6i06klm0
phospho-app
2025-05-21T16:11:00Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-21T16:08:40Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 229, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1067, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 311, in forward down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 224.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 98.75 MiB is free. Process 65 has 79.15 GiB memory in use. Of the allocated memory 76.81 GiB is allocated by PyTorch, and 1.83 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) 0%| | 1/2490 [00:11<7:49:00, 11.31s/it] The current batch size is too large for the GPU. Please consider lowering it to fit in the memory. We train on a 80GB A100 GPU. ``` ## Training parameters: - **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 80 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Oussama09D/model_full
Oussama09D
2025-05-21T16:10:50Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T16:08:03Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
songthienll/mobilevit_ai_real_classifier
songthienll
2025-05-21T16:10:24Z
0
0
transformers
[ "transformers", "safetensors", "mobilevit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:apple/mobilevit-small", "base_model:finetune:apple/mobilevit-small", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-21T15:35:25Z
--- library_name: transformers license: other base_model: apple/mobilevit-small tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: mobilevit_ai_real_classifier results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.93875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilevit_ai_real_classifier This model is a fine-tuned version of [apple/mobilevit-small](https://huggingface.co/apple/mobilevit-small) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1560 - Accuracy: 0.9387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2392 | 1.0 | 219 | 0.2392 | 0.9115 | | 0.1804 | 2.0 | 438 | 0.1905 | 0.9295 | | 0.1611 | 3.0 | 657 | 0.1772 | 0.935 | | 0.1225 | 4.0 | 876 | 0.1784 | 0.938 | | 0.0955 | 5.0 | 1095 | 0.2000 | 0.9325 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
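Since the "Intended uses" section of this card is left empty, here is a hedged sketch (not from the original card) of running the classifier with the transformers image-classification pipeline; the image path is a placeholder and the label names returned depend on the imagefolder dataset the model was trained on.

```python
from transformers import pipeline

# Load the fine-tuned MobileViT classifier from the Hub.
classifier = pipeline(
    "image-classification",
    model="songthienll/mobilevit_ai_real_classifier",
)

# Classify a local image (path is a placeholder).
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 4))
```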
DanielNRU/pollen-ner-1450
DanielNRU
2025-05-21T16:08:09Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-21T16:00:06Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-1450 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-1450 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1299 - Precision: 0.8752 - Recall: 0.9297 - F1: 0.9017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 182 | 0.1380 | 0.8652 | 0.9277 | 0.8953 | | No log | 2.0 | 364 | 0.1278 | 0.8776 | 0.9217 | 0.8991 | | 0.2279 | 3.0 | 546 | 0.1265 | 0.8795 | 0.9237 | 0.9011 | | 0.2279 | 4.0 | 728 | 0.1271 | 0.8762 | 0.9237 | 0.8993 | | 0.2279 | 5.0 | 910 | 0.1299 | 0.8752 | 0.9297 | 0.9017 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
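The card does not show how to load the adapter, so the following is a heavily hedged sketch of attaching it to the base model with PEFT. It assumes the adapter was trained for token classification and that `num_labels` matches the (unlisted) NER tag set; both are assumptions, not facts from the card.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel

base_model_id = "DeepPavlov/rubert-base-cased"
adapter_id = "DanielNRU/pollen-ner-1450"

# num_labels is a placeholder -- the card does not list the label set used for training.
base = AutoModelForTokenClassification.from_pretrained(base_model_id, num_labels=3)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Run a forward pass on an example sentence and inspect the per-token logits.
inputs = tokenizer("Пыльца берёзы в воздухе", return_tensors="pt")
print(model(**inputs).logits.shape)
```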
tabularisai/Zip-1
tabularisai
2025-05-21T16:07:41Z
14
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-20T11:39:38Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction base_model: sentence-transformers/all-MiniLM-L6-v2 pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.6751697498221416 name: Pearson Cosine - type: spearman_cosine value: 0.7044137530273638 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 32-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 32 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 32, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 32] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.6752 | | **spearman_cosine** | **0.7044** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Logs | Epoch | Step | spearman_cosine | |:-----:|:----:|:---------------:| | -1 | -1 | 0.7044 | ### Framework Versions - Python: 3.10.10 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.7.0+cu128 - Accelerate: - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
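Beyond the pairwise similarity shown above, the card lists semantic search as a use case; a small, hedged sketch (not from the original card) using `sentence_transformers.util.semantic_search` with this repository's id and a toy corpus:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tabularisai/Zip-1")

corpus = ["The weather is lovely today.", "He drove to the stadium.", "Zip files compress data."]
query = "It's so sunny outside!"

# Encode corpus and query into the 32-dimensional embedding space.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus entries for the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))
```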
vmpsergio/ea7a49bf-1511-4860-9cec-5955357f10b0
vmpsergio
2025-05-21T16:07:34Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:quantized:unsloth/Phi-3.5-mini-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T15:22:48Z
--- base_model: unsloth/Phi-3.5-mini-instruct library_name: transformers model_name: ea7a49bf-1511-4860-9cec-5955357f10b0 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for ea7a49bf-1511-4860-9cec-5955357f10b0 This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vmpsergio/ea7a49bf-1511-4860-9cec-5955357f10b0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/krgynw45) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
UmeAiRT/ComfyUI-Auto_installer
UmeAiRT
2025-05-21T16:07:24Z
12,997
27
diffusers
[ "diffusers", "onnx", "safetensors", "gguf", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-09-26T13:03:26Z
--- license: mit --- # UmeAiRT - ComfyUI auto installer I'm sharing my installation script, which automatically installs ComfyUI, workflows, models, custom nodes, and more. Just run "ComfyUI-AllinOne-Auto_install.bat". The script asks a few questions at the start, so only the desired elements are downloaded. ### Prerequisites : - [7zip](others/7z2409-x64.exe) - [git](others/Git-2.49.0-64-bit.exe) - [CUDA 12.8](others/cuda_12.8.1_windows_network.exe) ### What's included : #### ComfyUI : - ComfyUI portable version pytorch 2.7.0+cu128 - ComfyUI Manager - Interface settings - Xformers - Nvidia Apex - Sageattention - Triton #### Workflow : - TXT to IMG - IMG to IMG - INPAINT - OUTPAINT - PulID & REDUX - ControlNet HED/Canny/Openpose/Depth - TXT to VIDEO - IMG to VIDEO - StartEndFrames - Face to VIDEO - VIDEO EXTENSION - VIDEO to LOOP - Frames interpolations - Upscaler - Video merge #### WAN2.1 : - T2V Model - I2V Model - T2V GGUF Model - I2V GGUF Model - CLIP - CLIP Vision - VAE #### Flux1 : - flux1-dev - flux1-schnell-fp8 - GGUF - clip_l - t5xxl - VAE - ControlNet HED/Canny/Openpose/Depth ### Upscale Model : - RealESRGAN_x4plus.pth - RealESRGAN_x4plus_anime_6B.pth ### Custom Nodes : - ComfyUI-Custom-Scripts - ComfyUI-GGUF - ComfyUI-KJNodes - ComfyUI-VideoHelperSuite - ComfyUI-mxToolkit - ComfyUI-HunyuanVideoMultiLora - rgthree-comfy - ComfyUI-Frame-Interpolation - ComfyUI Impact Pack - ComfyUI-Easy-Use - ComfyUI_PuLID_Flux_ll - WAS Node Suite - ComfyUI-Florence2 - ComfyUI-Upscaler-Tensorrt - ComfyUI-MultiGPU - ComfyUI-WanStartEndFramesNative ![alt text][logo] [logo]: images/UmeAiRT.png "UmeAiRT logo"
unity/inference-engine-minilm-v6
unity
2025-05-21T16:04:17Z
11
3
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "sentence-similarity", "license:apache-2.0", "region:us" ]
sentence-similarity
2024-01-14T06:03:59Z
--- license: apache-2.0 library_name: unity-sentis pipeline_tag: sentence-similarity tags: - unity-inference-engine --- # Mini LM in Unity 6 with Inference Engine This is the [Mini LM v6](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model running in Unity 6 with Inference Engine. Mini LM is a sentence similarity model that compares different sentences and gives a score depending on how similar they are. ## How to Use * Create a new scene in Unity 6; * Install `com.unity.ai.inference` from the package manager; * Add the `RunMiniLM.cs` script to the Main Camera; * Drag the `MiniLMv6.onnx` asset from the `models` folder into the `Model Asset` field; * Drag the `vocab.txt` asset from the `data` folder into the `Vocab Asset` field. ## Preview Enter play mode. If working correctly, the sentence similarity score will be logged to the console. ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
ZorroOTW/q-FrozenLake-v1-4x4-noSlippery
ZorroOTW
2025-05-21T16:04:01Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-21T16:03:58Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ZorroOTW/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
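The snippet above assumes a `load_from_hub` helper and an already-imported `gym`; a minimal, hedged sketch of both is below, assuming the pickled file is a dict exposing `env_id` and `qtable` keys as in the Hugging Face Deep RL course template.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub("ZorroOTW/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```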
phospho-app/tictactoe-A1-orange-12815-lkstjhx4uf
phospho-app
2025-05-21T16:03:50Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-21T16:01:59Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 229, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1067, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 217, in forward return F.layer_norm( ^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/torch/nn/functional.py", line 2900, in layer_norm return torch.layer_norm( ^^^^^^^^^^^^^^^^^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 24.75 MiB is free. Process 548102 has 79.22 GiB memory in use. Of the allocated memory 78.46 GiB is allocated by PyTorch, and 266.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) 0%| | 0/2340 [00:06<?, ?it/s] The current batch size is too large for the GPU. Please consider lowering it to fit in the memory. We train on a 80GB A100 GPU. ``` ## Training parameters: - **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange) - **Wandb run URL**: None - **Epochs**: 15 - **Batch size**: 128 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
thefaftek/unsloth_test
thefaftek
2025-05-21T16:03:10Z
45
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-19T06:34:20Z
--- base_model: unsloth/qwen3-4b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thefaftek - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
bigfish951/wav2vec2-base-timit-demo-colab
bigfish951
2025-05-21T16:02:47Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T16:02:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hlhs211/aphasia_gemma3_12b
hlhs211
2025-05-21T15:59:27Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:quantized:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-21T15:54:18Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** hlhs211 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
dimasik2987/5ff2891e-96aa-4d0d-99bb-00857d4e117f
dimasik2987
2025-05-21T15:58:36Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:quantized:unsloth/Phi-3.5-mini-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T15:22:27Z
--- base_model: unsloth/Phi-3.5-mini-instruct library_name: transformers model_name: 5ff2891e-96aa-4d0d-99bb-00857d4e117f tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 5ff2891e-96aa-4d0d-99bb-00857d4e117f This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dimasik2987/5ff2891e-96aa-4d0d-99bb-00857d4e117f", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/givzxy09) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Abuzaid01/Ai_Human_Text_Detector
Abuzaid01
2025-05-21T15:58:27Z
0
0
null
[ "safetensors", "roberta", "text-classification", "ai-text-detection", "en", "dataset:custom", "license:mit", "region:us" ]
text-classification
2025-05-21T14:22:16Z
--- language: en license: mit library: pytorch datasets: - custom tags: - text-classification - ai-text-detection - roberta widget: - text: "The impact of artificial intelligence on modern society has been profound and far-reaching, transforming industries and reshaping how we live and work." - text: "The quantum mechanics principle demonstrates that particles can exist in multiple states simultaneously until observed, a phenomenon known as superposition." --- # AI vs Human Text Detector This model can detect whether a text was written by a human or generated by AI. ## Model description This AI text detector is built by fine-tuning RoBERTa-base on a dataset containing both human-written and AI-generated text samples. The model has been trained with data augmentation techniques to improve its robustness. ## Performance The model achieves the following performance on the validation set: - Accuracy: 0.9999 - F1-Score (Human): 1.0000 - F1-Score (AI): 0.9999 ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch # Load model and tokenizer model_name = "Abuzaid01/Ai_Human_Text_Detector" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) # Prepare text for classification text = "Your text to classify goes here." inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512, padding=True) # Run inference with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits # Get the predicted class and probabilities probabilities = torch.nn.functional.softmax(logits, dim=1) predicted_class_idx = torch.argmax(probabilities, dim=1).item() confidence = probabilities[0][predicted_class_idx].item() # Map class index to label labels = ["Human-written", "AI-generated"] predicted_label = labels[predicted_class_idx] print(f"Prediction: {predicted_label}") print(f"Confidence: {confidence:.4f}") ```
Aderfish/MNLP_M2_quantized_model
Aderfish
2025-05-21T15:58:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T15:57:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
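Since the "How to Get Started with the Model" section above is still empty, here is a hedged sketch based only on the repository tags (transformers, qwen3, text-generation, 8-bit bitsandbytes); it assumes the quantization config is stored with the checkpoint, and the example question is illustrative.

```python
# Minimal sketch, assuming the checkpoint ships with its bitsandbytes 8-bit
# quantization config (as the repo tags suggest); requires `bitsandbytes`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aderfish/MNLP_M2_quantized_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "State the quadratic formula."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```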
John6666/reality-pony-xl-v10-sdxl
John6666
2025-05-21T15:53:09Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "reality", "game", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-21T15:47:18Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - reality - game - pony --- Original model is [here](https://civitai.com/models/1605047/realityponyxl?modelVersionId=1816307). This model created by [dgracey01300](https://civitai.com/user/dgracey01300).
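The card only links to the original upload, so here is a minimal diffusers sketch; the prompt and sampler settings are illustrative placeholders, not recommendations from the model author (see the Civitai page linked above for the intended prompting style).

```python
# Minimal sketch: load the SDXL checkpoint with diffusers and generate an image.
# Prompt, step count, and guidance scale are illustrative placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/reality-pony-xl-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "photorealistic portrait of a hiker on a mountain trail, golden hour",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("reality_pony_xl_sample.png")
```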
amithsourya/Script-Generate-4GL-V1.0
amithsourya
2025-05-21T15:53:08Z
0
0
transformers
[ "transformers", "safetensors", "OUAF", "Oracle", "4GL", "en", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-05-21T13:19:58Z
--- library_name: transformers tags: - OUAF - Oracle - 4GL license: mit language: - en base_model: - meta-llama/Llama-3.2-1B-Instruct --- ### Model Description Generate 4GL Scripts from english prompts - **Developed by:** Amith Sourya Sadineni - **Model type:** Text Generation - **Language(s):** Python - **License:** MIT - **Finetuned from model:** meta-llama/Llama-3.2-1B-Instruct ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://huggingface.co/amithsourya/Script-Generate-4GL-V1.0/blob/main/adapter_model.safetensors - **Demo:** ```python from huggingface_hub import notebook_login notebook_login() from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline from peft import PeftModel, PeftConfig lora_path = "amithsourya/Script-Generate-4GL-V1.0" peft_config = PeftConfig.from_pretrained(lora_path) base_model = AutoModelForCausalLM.from_pretrained( peft_config.base_model_name_or_path, device_map="auto", torch_dtype="auto" ) model = PeftModel.from_pretrained(base_model, lora_path) tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map="auto") prompt = "invoke a BO for read" outputs = pipe(prompt, max_new_tokens=256) print(outputs[0]["generated_text"]) ``` ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** T4 GPU - **Hours used:** 0H:23M ## Example ![image/png](https://cdn-uploads.huggingface.co/production/uploads/682b328fb814376780257a17/aaz_ESL50FOX-KLK4Xb8f.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/682b328fb814376780257a17/X4ZMrUHPVEy7-zI0rm5LL.png)
TaoTao0216/DreamForge-DiT
TaoTao0216
2025-05-21T15:52:16Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:12:04Z
--- license: apache-2.0 ---
iancu003/climateSentiment2
iancu003
2025-05-21T15:51:06Z
0
0
fastai
[ "fastai", "region:us" ]
null
2025-05-21T15:50:50Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
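Until the card is filled in, a hedged way to try the model is the fastai integration in `huggingface_hub`; the example sentence is illustrative, and the returned labels depend on how the learner was trained.

```python
# Minimal sketch (not from the card): load the exported fastai learner from the
# Hub and run a single prediction. Requires `fastai` and `huggingface_hub`.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("iancu003/climateSentiment2")
prediction = learner.predict("Renewable energy investment reached a record high this year.")
print(prediction)  # (predicted label, label index, class probabilities)
```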
cjlaguardia/rodrigo_duterte
cjlaguardia
2025-05-21T15:49:57Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-21T15:49:49Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/ComfyUI_temp_uknup_00018_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00016_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00015_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00014_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00013_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00010_.png - text: '-' output: url: images/ComfyUI_temp_uknup_00009_.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: Rodrigo_Duterte license: apache-2.0 --- # Rodrigo_Duterte <Gallery /> ## Model description Trained on a dataset of 20 images with 10 repeats per image, for 15 epochs (3,000 steps). ## Trigger words You should use `Rodrigo_Duterte` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/cjlaguardia/rodrigo_duterte/tree/main) them in the Files & versions tab.
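The card gives the trigger word but no loading code; below is a hedged diffusers sketch for applying the LoRA on top of FLUX.1-dev. It assumes the weights are in a diffusers-loadable safetensors format (as the `template:diffusion-lora` tag suggests); note that the FLUX.1-dev base model is gated and must be accepted on its own page first. The prompt and settings are illustrative.

```python
# Minimal sketch: apply the LoRA to the FLUX.1-dev base pipeline.
# Prompt and settings are illustrative; use the `Rodrigo_Duterte` trigger word.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("cjlaguardia/rodrigo_duterte")

image = pipe(
    "Rodrigo_Duterte giving a speech at a podium, press photograph",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("rodrigo_duterte_sample.png")
```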
unity/inference-engine-blaze-face
unity
2025-05-21T15:49:43Z
10
13
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "object-detection", "license:apache-2.0", "region:us" ]
object-detection
2024-01-12T23:34:30Z
--- license: apache-2.0 library_name: unity-sentis pipeline_tag: object-detection tags: - unity-inference-engine --- ## BlazeFace in Unity 6 with Inference Engine BlazeFace is a fast, light-weight face detector from Google Research that can be used in Unity using Inference Engine. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/face_detector) framework. ## How to Use Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/inference-engine-samples/tree/main/BlazeDetectionSample/Pose) ![preview](./images/face.jpg) ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
unity/inference-engine-blaze-pose
unity
2025-05-21T15:49:18Z
0
3
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "object-detection", "license:apache-2.0", "region:us" ]
object-detection
2024-09-16T14:25:42Z
--- license: apache-2.0 library_name: unity-sentis pipeline_tag: object-detection tags: - unity-inference-engine --- ## BlazePose in Unity 6 with Inference Engine BlazePose is a fast, light-weight pose detector from Google Research that can be used in Unity using Inference Engine. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker) framework. ## How to Use Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/inference-engine-samples/tree/main/BlazeDetectionSample/Pose) ![preview](./images/pose.jpg) ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
teddy19901/llama3-liora
teddy19901
2025-05-21T15:47:29Z
0
0
null
[ "safetensors", "llama", "region:us" ]
null
2025-05-21T15:05:43Z
--- license: apache-2.0 language: de tags: - liora - empathy - llama3 - fine-tuned - mental-health --- # 🧠 LIORA – Empathic LLaMA3 Model (Fine-tuned) This is a fine-tuned version of `Meta-Llama-3-8B-Instruct`, trained on German-language emotional inputs. ## 🔥 Example usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM hf_repo = "teddy19901/llama3-liora" model = AutoModelForCausalLM.from_pretrained(hf_repo) tokenizer = AutoTokenizer.from_pretrained(hf_repo) prompt = "Ich fühle mich heute sehr einsam." inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=50) decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(decoded[0]) ```
Oussama09D/model_lora
Oussama09D
2025-05-21T15:47:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T15:44:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VideoS-18k-katrina-lim-kiffy-katrinalim123/Update.portal.zacarias.gotinha.iml.portal.zacarias.gotinha.no.iml.Full.Video
VideoS-18k-katrina-lim-kiffy-katrinalim123
2025-05-21T15:46:47Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:41:07Z
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?katrina-lim) [►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️​](https://videohere.top/?katrina-lim) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?katrina-lim)
Damrongbou/kcpx1-1_model
Damrongbou
2025-05-21T15:43:09Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-21T15:42:37Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Damrongbou - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
vermoney/4d5ee455-0dc6-4d1a-bde1-b347e7c36a11
vermoney
2025-05-21T15:42:46Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:quantized:unsloth/Phi-3.5-mini-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T15:23:45Z
--- base_model: unsloth/Phi-3.5-mini-instruct library_name: transformers model_name: 4d5ee455-0dc6-4d1a-bde1-b347e7c36a11 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 4d5ee455-0dc6-4d1a-bde1-b347e7c36a11 This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vermoney/4d5ee455-0dc6-4d1a-bde1-b347e7c36a11", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/jc3l10op) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Abhishek4545/distilbert-base-uncased-finetuned-imdb
Abhishek4545
2025-05-21T15:42:32Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-21T15:35:11Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4894 - Model Preparation Time: 0.0029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | |:-------------:|:-----:|:----:|:---------------:|:----------------------:| | 2.6838 | 1.0 | 157 | 2.5094 | 0.0029 | | 2.5878 | 2.0 | 314 | 2.4502 | 0.0029 | | 2.5279 | 3.0 | 471 | 2.4819 | 0.0029 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
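The card lists training details but no usage. A minimal sketch with the fill-mask pipeline (the example sentence is illustrative) is:

```python
# Minimal sketch: query the fine-tuned masked language model.
from transformers import pipeline

mask_filler = pipeline(
    "fill-mask", model="Abhishek4545/distilbert-base-uncased-finetuned-imdb"
)
for pred in mask_filler("This movie was a [MASK] experience from start to finish."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```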
unity/inference-engine-othello
unity
2025-05-21T15:42:30Z
11
2
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "reinforcement-learning", "license:mit", "region:us" ]
reinforcement-learning
2024-01-10T03:27:48Z
--- license: mit library_name: unity-sentis pipeline_tag: reinforcement-learning tags: - unity-inference-engine --- ## Othello game playing model in Unity 6 with Inference Engine This is an Othello game playing model based on [Alpha Zero General](https://github.com/suragnair/alpha-zero-general), a simplified, general-purpose implementation of the AlphaGo Zero self-play approach. ## How to Use Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/inference-engine-samples/tree/main/BoardGameAISample) ![preview](othello-preview.png) ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
John6666/omnimuse35-v2-sdxl
John6666
2025-05-21T15:41:45Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "furry", "semi-realistic", "stylized aesthetics", "2D", "2.5D", "toon shading", "background", "prompt following", "merge", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v1.0", "base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-21T15:36:24Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - furry - semi-realistic - stylized aesthetics - 2D - 2.5D - toon shading - background - prompt following - merge - illustrious base_model: OnomaAIResearch/Illustrious-XL-v1.0 --- Original model is [here](https://civitai.com/models/1560969?modelVersionId=1816197). This model created by [Mrskel4](https://civitai.com/user/Mrskel4).
kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF
kkioikk
2025-05-21T15:40:51Z
37
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:ByteDance-Seed/Seed-Coder-8B-Instruct", "base_model:quantized:ByteDance-Seed/Seed-Coder-8B-Instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-13T10:56:23Z
--- base_model: ByteDance-Seed/Seed-Coder-8B-Instruct library_name: transformers license: mit pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF This model was converted to GGUF format from [`ByteDance-Seed/Seed-Coder-8B-Instruct`](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) for more details on the model. Notes on using small-parameter models: Keep system prompts concise; system prompts take precedence over per-turn user instructions. If the model fails to follow your instructions, edit the conversation history instead: for example, preset a few example turns ({user}, {assistant}) at the start of the chat, or directly modify or replace earlier model outputs in the history, so that the model can better learn the format and requirements you expect.
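As an illustration of the history-editing advice above, here is a hedged sketch using `llama-cpp-python`; the `.gguf` filename is a placeholder (check the repository files), and the preset turns are examples of the pattern, not part of the original card.

```python
# Minimal sketch: preset a couple of example turns in the chat history so the
# small model sees the expected output format before the real request.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="kkioikk/Seed-Coder-8B-Instruct-Q5_K_M-GGUF",
    filename="seed-coder-8b-instruct-q5_k_m.gguf",  # placeholder; check repo files
    n_ctx=8192,
)

messages = [
    {"role": "system", "content": "You are a coding assistant. Reply with code only."},
    # Preset example turn: demonstrates the expected style of answer.
    {"role": "user", "content": "Write a Python function that squares a number."},
    {"role": "assistant", "content": "def square(x):\n    return x * x"},
    # The actual request.
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

response = llm.create_chat_completion(messages=messages, max_tokens=128)
print(response["choices"][0]["message"]["content"])
```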
DanielNRU/pollen-ner-1250
DanielNRU
2025-05-21T15:40:29Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T11:29:05Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-1250 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-1250 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1455 - Precision: 0.8614 - Recall: 0.9237 - F1: 0.8915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 157 | 0.1455 | 0.8614 | 0.9237 | 0.8915 | | No log | 2.0 | 314 | 0.1406 | 0.8625 | 0.9197 | 0.8902 | | No log | 3.0 | 471 | 0.1420 | 0.8596 | 0.9217 | 0.8895 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
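No usage example is included in the card; the sketch below is a guess at how the PEFT adapter could be loaded for token classification. The number of labels is a placeholder that must match the adapter's configuration, and the Russian example sentence is illustrative.

```python
# Hedged sketch: load the LoRA adapter on top of the rubert base model for NER.
# num_labels=3 is a placeholder -- it must match the label set used in training.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

base_id = "DeepPavlov/rubert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=3)
model = PeftModel.from_pretrained(base_model, "DanielNRU/pollen-ner-1250")

text = "Сегодня в воздухе высокая концентрация пыльцы берёзы."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # per-token label ids
```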
Nimra-Mehra-New-Video/nimra.mehra.jobz.hunting.nimra.video.nimra.mehra.new.video
Nimra-Mehra-New-Video
2025-05-21T15:39:56Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:39:26Z
<a href="https://tv2online.com/Video/?v=xxx" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p> <a href="https://tv2online.com/Video/?v=xxx" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a></p> <p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Video/?v=xxx"><img border="Viral+Leaked+Video" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
kokovova/da5dbbbf-2d75-4451-ba84-1694ec873b0b
kokovova
2025-05-21T15:39:21Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:quantized:unsloth/Phi-3.5-mini-instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T15:22:41Z
--- base_model: unsloth/Phi-3.5-mini-instruct library_name: transformers model_name: da5dbbbf-2d75-4451-ba84-1694ec873b0b tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for da5dbbbf-2d75-4451-ba84-1694ec873b0b This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kokovova/da5dbbbf-2d75-4451-ba84-1694ec873b0b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/2cpbi5di) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ycy17/Qwen2.5-1.5B-Open-R1-Distill
ycy17
2025-05-21T15:38:40Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:open-r1/OpenR1-Math-220k", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-28T12:11:51Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct datasets: open-r1/OpenR1-Math-220k library_name: transformers model_name: Qwen2.5-1.5B-Open-R1-Distill tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen2.5-1.5B-Open-R1-Distill This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ycy17/Qwen2.5-1.5B-Open-R1-Distill", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yingcy17/huggingface/runs/01ru4o67) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yen-do-2001-huyen-cheng-2004-lo/Lien.Ket.Video.18.yen.do.2001.huyen.cheng.2004.lo.clip.huyen.trang.2004.clip
yen-do-2001-huyen-cheng-2004-lo
2025-05-21T15:38:13Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:36:38Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=yen-do-2001-huyen-cheng-2004-lo) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=yen-do-2001-huyen-cheng-2004-lo) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=yen-do-2001-huyen-cheng-2004-lo)
ggml-org/gemma-3-12b-it-GGUF
ggml-org
2025-05-21T15:38:00Z
7,050
22
null
[ "gguf", "image-text-to-text", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2311.12022", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2312.11805", "base_model:google/gemma-3-12b-it", "base_model:quantized:google/gemma-3-12b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-03-12T06:28:34Z
--- license: gemma base_model: - google/gemma-3-12b-it pipeline_tag: image-text-to-text --- # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context of 8192 tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://goo.gle/Gemma3Report}, publisher={Kaggle}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and 1B with 2 trillion tokens. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. 
### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. - These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. 
### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: #### Reasoning and factuality | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:| | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 | | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 | | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 | | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 | | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 | | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 | | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 | | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 | | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 | | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 | | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 | [hellaswag]: https://arxiv.org/abs/1905.07830 [boolq]: https://arxiv.org/abs/1905.10044 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [arc]: https://arxiv.org/abs/1911.01547 [winogrande]: https://arxiv.org/abs/1907.10641 [bbh]: https://paperswithcode.com/dataset/bbh [drop]: https://arxiv.org/abs/1903.00161 #### STEM and code | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:| | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 | | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 | | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 | | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 | | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 | | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 | | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 | | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 | [mmlu]: https://arxiv.org/abs/2009.03300 [agieval]: https://arxiv.org/abs/2304.06364 [math]: https://arxiv.org/abs/2103.03874 [gsm8k]: https://arxiv.org/abs/2110.14168 [gpqa]: https://arxiv.org/abs/2311.12022 [mbpp]: https://arxiv.org/abs/2108.07732 [humaneval]: https://arxiv.org/abs/2107.03374 #### Multilingual | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:| | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 | | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 | | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 | | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 | | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 | | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 | | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 | [mgsm]: https://arxiv.org/abs/2210.03057 [flores]: https://arxiv.org/abs/2106.03193 [xquad]: https://arxiv.org/abs/1910.11856v3 [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite [wmt24pp]: https://arxiv.org/abs/2502.12404v1 [eclektic]: https://arxiv.org/abs/2502.21228 [indicgenbench]: https://arxiv.org/abs/2404.16816 #### Multimodal | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B | | ------------------------------ |:-------------:|:--------------:|:--------------:| | 
[COCOcap][coco-cap] | 102 | 111 | 116 | | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 | | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 | | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 | | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 | | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 | | [ReMI][remi] | 27.3 | 38.5 | 44.8 | | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 | | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 | | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 | | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 | | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 | | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 | | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 | | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 | [coco-cap]: https://cocodataset.org/#home [docvqa]: https://www.docvqa.org/ [info-vqa]: https://arxiv.org/abs/2104.12756 [mmmu]: https://arxiv.org/abs/2311.16502 [textvqa]: https://textvqa.org/ [realworldqa]: https://paperswithcode.com/dataset/realworldqa [remi]: https://arxiv.org/html/2406.09175v1 [ai2d]: https://allenai.org/data/diagrams [chartqa]: https://arxiv.org/abs/2203.10244 [vqav2]: https://visualqa.org/index.html [blinkvqa]: https://arxiv.org/abs/2404.12390 [okvqa]: https://okvqa.allenai.org/ [tallyqa]: https://arxiv.org/abs/1810.12440 [ss-vqa]: https://arxiv.org/abs/1908.02660 [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/ ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: - **Child Safety**: Evaluation of text-to-text and image to text prompts covering child safety policies, including child sexual abuse and exploitation. - **Content Safety:** Evaluation of text-to-text and image to text prompts covering safety policies including, harassment, violence and gore, and hate speech. - **Representational Harms**: Evaluation of text-to-text and image to text prompts covering safety policies including bias, stereotyping, and harmful associations or inaccuracies. In addition to development level evaluations, we conduct "assurance evaluations" which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High level findings are fed back to the model team, but prompt sets are held-out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review. ### Evaluation Results For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors. For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations, and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was they included only English language prompts. 
## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open vision-language models (VLMs) models have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. - Content Creation and Communication - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications. - Research and Education - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field. - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations - Training Data - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. - The scope of the training dataset determines the subject areas the model can handle effectively. - Context and Task Complexity - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). - Language Ambiguity and Nuance - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language. - Factual Accuracy - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. - Common Sense - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: - Bias and Fairness - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. - Misinformation and Misuse - VLMs can be misused to generate text that is false, misleading, or harmful. - Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. - Transparency and Accountability: - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. 
- A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: - **Perpetuation of biases**: Continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases are encouraged. - **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. - **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. - **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other comparably sized open model alternatives. [g3-tech-report]: https://goo.gle/Gemma3Report [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3 [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3 [terms]: https://ai.google.dev/gemma/terms [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/jax-ml/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [gemini-2-paper]: https://arxiv.org/abs/2312.11805
kyars/llama-host
kyars
2025-05-21T15:37:50Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-21T15:37:42Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kyars - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
unity/inference-engine-mobilenet-v2
unity
2025-05-21T15:37:34Z
16
0
unity-sentis
[ "unity-sentis", "onnx", "unity-inference-engine", "image-classification", "arxiv:1801.04381", "license:mit", "region:us" ]
image-classification
2024-01-10T03:07:09Z
--- license: mit library_name: unity-sentis pipeline_tag: image-classification tags: - unity-inference-engine --- # MobileNet V2 in Unity 6 with Inference Engine This is the [MobileNet V2](https://arxiv.org/abs/1801.04381) model running in Unity 6 with Inference Engine. This is a small image classification model. ## How to Use * Create a new scene in Unity 6; * Install `com.unity.ai.inference` from the package manager; * Add the `RunMobileNet.cs` script to the Main Camera; * Drag the `mobilenet_v2.onnx` asset from the `models` folder into the `Model Asset` field; * Drag the `class_desc.txt` asset from the `data` folder into the `Labels Asset` field; * Drag an image, such as the `Bee.jpg` asset from the `images` folder, into the `Input Image` field. ## Preview Enter play mode. If everything is working correctly, the predicted class will be logged to the console. ## Inference Engine Inference Engine is a neural network inference library for Unity. Find out more [here](https://docs.unity3d.com/Packages/com.unity.ai.inference@latest).
GracieH301/3Bmodel
GracieH301
2025-05-21T15:36:48Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T15:31:29Z
--- base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** GracieH301 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
John6666/illustrius-anime-mix-v10-sdxl
John6666
2025-05-21T15:36:22Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "reality", "game", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-21T15:31:10Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - reality - game - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1604236/illustriusanimemix). This model was created by [dgracey01300](https://civitai.com/user/dgracey01300).
DanielNRU/pollen-ner-1200
DanielNRU
2025-05-21T15:35:56Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T14:43:23Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-1200 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-1200 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1386 - Precision: 0.8707 - Recall: 0.9197 - F1: 0.8945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 150 | 0.1474 | 0.8539 | 0.9157 | 0.8837 | | No log | 2.0 | 300 | 0.1401 | 0.8604 | 0.9157 | 0.8872 | | No log | 3.0 | 450 | 0.1451 | 0.8492 | 0.9157 | 0.8812 | | 0.2697 | 4.0 | 600 | 0.1409 | 0.8606 | 0.9177 | 0.8882 | | 0.2697 | 5.0 | 750 | 0.1438 | 0.8555 | 0.9157 | 0.8846 | | 0.2697 | 6.0 | 900 | 0.1404 | 0.8674 | 0.9197 | 0.8928 | | 0.2583 | 7.0 | 1050 | 0.1429 | 0.8655 | 0.9177 | 0.8908 | | 0.2583 | 8.0 | 1200 | 0.1386 | 0.8707 | 0.9197 | 0.8945 | | 0.2583 | 9.0 | 1350 | 0.1403 | 0.8677 | 0.9217 | 0.8939 | | 0.2493 | 10.0 | 1500 | 0.1409 | 0.8642 | 0.9197 | 0.8911 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
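The card does not include inference code, so a minimal usage sketch for loading the adapter for token classification is given below. It assumes the adapter plugs into a token-classification head on the base model and that `num_labels` matches the label set used in training (not documented here); the example sentence and the raw label ids are illustrative only.

```python
# Hypothetical usage sketch: loading the pollen-ner-1200 PEFT adapter for NER.
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "DeepPavlov/rubert-base-cased"
adapter_id = "DanielNRU/pollen-ner-1200"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels must match the label set used during fine-tuning (not documented in this card)
base_model = AutoModelForTokenClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id).eval()

text = "Сегодня в воздухе высокая концентрация пыльцы берёзы."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
pred_ids = logits.argmax(dim=-1)[0].tolist()
print(list(zip(tokens, pred_ids)))  # raw label ids; map them to entity names per your training config
```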
kitz234/Byxe
kitz234
2025-05-21T15:33:09Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:33:09Z
--- license: apache-2.0 ---
Arijit20200204067/gemma3_12B_finetune
Arijit20200204067
2025-05-21T15:31:55Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T15:23:27Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Arijit20200204067 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Nimra-Mehra-New-Video/Link.Video.18.jobz.hunting.nimra.video.nimra.mehra.video.nimra.mehra
Nimra-Mehra-New-Video
2025-05-21T15:31:52Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:30:43Z
18 seconds ago <a href="https://tv2online.com/Video/?v=xxx" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p> <a href="https://tv2online.com/Video/?v=xxx" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a></p> <p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Video/?v=xxx"><img border="Viral+Leaked+Video" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
duyntnet/Llama-3.1-Nemotron-Nano-4B-v1.1-imatrix-GGUF
duyntnet
2025-05-21T15:31:19Z
0
0
transformers
[ "transformers", "gguf", "imatrix", "Llama-3.1-Nemotron-Nano-4B-v1.1", "text-generation", "en", "arxiv:2408.11796", "license:other", "region:us", "conversational" ]
text-generation
2025-05-21T14:54:41Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Llama-3.1-Nemotron-Nano-4B-v1.1 --- Quantizations of https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 ### Open source inference clients/UIs * [llama.cpp](https://github.com/ggerganov/llama.cpp) * [KoboldCPP](https://github.com/LostRuins/koboldcpp) * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [ollama](https://github.com/ollama/ollama) * [jan](https://github.com/janhq/jan) ### Closed source inference clients/UIs * [LM Studio](https://lmstudio.ai/) * [Backyard AI](https://backyard.ai/) * More will be added... --- # From original readme Llama-3.1-Nemotron-Nano-4B-v1.1 is a large language model (LLM) which is a derivative of [nvidia/Llama-3.1-Minitron-4B-Width-Base](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base), which is created from Llama 3.1 8B using [our LLM compression technique](https://arxiv.org/abs/2408.11796) and offers improvements in model accuracy and efficiency. It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling. Llama-3.1-Nemotron-Nano-4B-v1.1 is a model which offers a great tradeoff between model accuracy and efficiency. The model fits on a single RTX GPU and can be used locally. The model supports a context length of 128K. This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling as well as multiple reinforcement learning (RL) stages using Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and RPO checkpoints. This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here: - [Llama-3.3-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1) - [Llama-3.3-Nemotron-Super-49B-v1](https://huggingface.co/nvidia/Llama-3.3-Nemotron-Super-49B-v1) - [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) This model is ready for commercial use. ## Quick Start and Usage Recommendations: 1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt. 2. We recommend setting temperature to `0.6`, and Top P to `0.95` for Reasoning ON mode. 3. We recommend using greedy decoding for Reasoning OFF mode. 4. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required. See the snippet below for usage with the Hugging Face Transformers library. Reasoning mode (ON/OFF) is controlled via the system prompt. Please see the example below. Our code requires the transformers package version to be `4.44.2` or higher.
### Example of “Reasoning On:” ```python import torch import transformers model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1" model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"} tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id pipeline = transformers.pipeline( "text-generation", model=model_id, tokenizer=tokenizer, max_new_tokens=32768, temperature=0.6, top_p=0.95, **model_kwargs ) # Thinking can be "on" or "off" thinking = "on" print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}])) ``` ### Example of “Reasoning Off:” ```python import torch import transformers model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1" model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"} tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id pipeline = transformers.pipeline( "text-generation", model=model_id, tokenizer=tokenizer, max_new_tokens=32768, do_sample=False, **model_kwargs ) # Thinking can be "on" or "off" thinking = "off" print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}])) ``` For some prompts, even though thinking is disabled, the model emergently prefers to think before responding. But if desired, the users can prevent it by pre-filling the assistant response. ```python import torch import transformers model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1" model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"} tokenizer = transformers.AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token_id = tokenizer.eos_token_id # Thinking can be "on" or "off" thinking = "off" pipeline = transformers.pipeline( "text-generation", model=model_id, tokenizer=tokenizer, max_new_tokens=32768, do_sample=False, **model_kwargs ) print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}, {"role":"assistant", "content":"<think>\n</think>"}])) ``` ## Running a vLLM Server with Tool-call Support Llama-3.1-Nemotron-Nano-4B-v1.1 supports tool calling. This HF repo hosts a tool-callilng parser as well as a chat template in Jinja, which can be used to launch a vLLM server. Here is a shell script example to launch a vLLM server with tool-call support. `vllm/vllm-openai:v0.6.6` or newer should support the model. ```shell #!/bin/bash CWD=$(pwd) PORT=5000 git clone https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 docker run -it --rm \ --runtime=nvidia \ --gpus all \ --shm-size=16GB \ -p ${PORT}:${PORT} \ -v ${CWD}:${CWD} \ vllm/vllm-openai:v0.6.6 \ --model $CWD/Llama-3.1-Nemotron-Nano-4B-v1.1 \ --trust-remote-code \ --seed 1 \ --host "0.0.0.0" \ --port $PORT \ --served-model-name "Llama-Nemotron-Nano-4B-v1.1" \ --tensor-parallel-size 1 \ --max-model-len 131072 \ --gpu-memory-utilization 0.95 \ --enforce-eager \ --enable-auto-tool-choice \ --tool-parser-plugin "${CWD}/Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_toolcall_parser.py" \ --tool-call-parser "llama_nemotron_json" \ --chat-template "${CWD}/Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_generic_tool_calling.jinja" ``` Alternatively, you can use a virtual environment to launch a vLLM server like below. 
```console $ git clone https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 $ conda create -n vllm python=3.12 -y $ conda activate vllm $ python -m vllm.entrypoints.openai.api_server \ --model Llama-3.1-Nemotron-Nano-4B-v1.1 \ --trust-remote-code \ --seed 1 \ --host "0.0.0.0" \ --port 5000 \ --served-model-name "Llama-Nemotron-Nano-4B-v1.1" \ --tensor-parallel-size 1 \ --max-model-len 131072 \ --gpu-memory-utilization 0.95 \ --enforce-eager \ --enable-auto-tool-choice \ --tool-parser-plugin "Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_toolcall_parser.py" \ --tool-call-parser "llama_nemotron_json" \ --chat-template "Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_generic_tool_calling.jinja" ``` After launching a vLLM server, you can call the server with tool-call support using a Python script like below. ```python >>> from openai import OpenAI >>> client = OpenAI( base_url="http://0.0.0.0:5000/v1", api_key="dummy", ) >>> completion = client.chat.completions.create( model="Llama-Nemotron-Nano-v1.1", messages=[ {"role": "system", "content": "detailed thinking on"}, {"role": "user", "content": "My bill is $100. What will be the amount for 18% tip?"}, ], tools=[ {"type": "function", "function": {"name": "calculate_tip", "parameters": {"type": "object", "properties": {"bill_total": {"type": "integer", "description": "The total amount of the bill"}, "tip_percentage": {"type": "integer", "description": "The percentage of tip to be applied"}}, "required": ["bill_total", "tip_percentage"]}}}, {"type": "function", "function": {"name": "convert_currency", "parameters": {"type": "object", "properties": {"amount": {"type": "integer", "description": "The amount to be converted"}, "from_currency": {"type": "string", "description": "The currency code to convert from"}, "to_currency": {"type": "string", "description": "The currency code to convert to"}}, "required": ["from_currency", "amount", "to_currency"]}}}, ], ) >>> completion.choices[0].message.content '<think>\nOkay, let\'s see. The user has a bill of $100 and wants to know the amount of a 18% tip. So, I need to calculate the tip amount. The available tools include calculate_tip, which requires bill_total and tip_percentage. The parameters are both integers. The bill_total is 100, and the tip percentage is 18. So, the function should multiply 100 by 18% and return 18.0. But wait, maybe the user wants the total including the tip? The question says "the amount for 18% tip," which could be interpreted as the tip amount itself. Since the function is called calculate_tip, it\'s likely that it\'s designed to compute the tip, not the total. So, using calculate_tip with bill_total=100 and tip_percentage=18 should give the correct result. The other function, convert_currency, isn\'t relevant here. So, I should call calculate_tip with those values.\n</think>\n\n' >>> completion.choices[0].message.tool_calls [ChatCompletionMessageToolCall(id='chatcmpl-tool-2972d86817344edc9c1e0f9cd398e999', function=Function(arguments='{"bill_total": 100, "tip_percentage": 18}', name='calculate_tip'), type='function')] ```
barek2k2/bert_hipaa_sensitive_db_schema
barek2k2
2025-05-21T15:31:09Z
38
1
transformers
[ "transformers", "safetensors", "bert", "text-classification", "BERT", "HIPAA", "PHI", "LLM", "sensitive data", "classification", "healthcare", "mHealth Application", "cybersecurity", "database", "column name classifier", "data field classifier", "huggingface", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-02T00:46:19Z
--- language: en license: mit tags: - BERT - HIPAA - PHI - LLM - sensitive data - classification - healthcare - mHealth Application - cybersecurity - database - column name classifier - data field classifier - transformers - huggingface model-index: - name: LLM BERT Model for HIPAA-Sensitive Database Fields Classification results: [] --- # LLM BERT Model for HIPAA-Sensitive Database Fields Classification This repository hosts a fine-tuned BERT-base model that classifies database column names as either **PHI HIPAA-sensitive** (e.g., `birthDate`, `ssn`, `address`) or **non-sensitive** (e.g., `color`, `food`, `country`). Use this model for: - Masking PHI data fields before sharing a database, to avoid HIPAA violations - Preprocessing before data anonymization - Identifying patients' sensitive data fields in a dataset before training an AI model - Enhancing security in healthcare and mHealth applications --- ## 🧠 Model Info - **Base Model**: `bert-base-uncased` - **Task**: Binary classification (PHI HIPAA Sensitive vs Non-sensitive) - **Trained On**: GAN-generated synthetic and real-world column name examples - **Framework**: Hugging Face Transformers - **Model URL**: [https://huggingface.co/barek2k2/bert_hipaa_sensitive_db_schema](https://huggingface.co/barek2k2/bert_hipaa_sensitive_db_schema) --- ## 🚀 Usage Example (End-to-End) ### 1. Install Requirements ```bash pip install torch transformers ``` ### 2. Example ```python import torch from transformers import BertTokenizer, BertForSequenceClassification # Load model and tokenizer model = BertForSequenceClassification.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema") tokenizer = BertTokenizer.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema") model.eval() # Example column names texts = ["birthDate", "country", "jwtToken", "color"] # Tokenize input inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=128) # Predict with torch.no_grad(): outputs = model(**inputs) predictions = torch.argmax(outputs.logits, dim=1) # Display results for text, pred in zip(texts, predictions): label = "Sensitive" if pred.item() == 1 else "Non-sensitive" print(f"{text}: {label}") ``` ### 3. Output ```text birthDate: Sensitive country: Non-sensitive jwtToken: Sensitive color: Non-sensitive ``` In the healthcare industry, safeguarding sensitive patient data is of utmost importance, particularly when developing and maintaining software systems that involve database sharing. The Health Insurance Portability and Accountability Act (HIPAA) mandates strict regulations to ensure the privacy and security of Protected Health Information (PHI). Healthcare organizations must comply with these regulations to prevent unauthorized access, breaches, and potential legal consequences. However, ensuring HIPAA compliance becomes a complex challenge when databases are shared among multiple teams for debugging, development, and testing purposes. This research work proposes a novel approach that uses a BERT-based LLM to identify sensitive database columns in the database schema in order to avoid PHI HIPAA violations. #### Disclaimer This LLM model is fine-tuned with a synthetic dataset (~50K) and is provided for research and educational purposes only. Always verify compliance before use in production environments.
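As a hedged follow-up to the usage example above, the sketch below shows one way the classifier's predictions could drive masking of flagged columns before a table is shared; the example columns, the pandas workflow, and the mask token are illustrative assumptions, not part of the released model.

```python
# Hypothetical follow-up sketch: masking values of columns flagged as PHI-sensitive
# before a table is shared. Column names and the mask token are illustrative only.
import pandas as pd
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema").eval()
tokenizer = BertTokenizer.from_pretrained("barek2k2/bert_hipaa_sensitive_db_schema")

def sensitive_columns(columns):
    """Return the subset of column names the classifier flags as PHI HIPAA-sensitive."""
    inputs = tokenizer(list(columns), return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        preds = torch.argmax(model(**inputs).logits, dim=1)
    return {col for col, p in zip(columns, preds) if p.item() == 1}

# Illustrative table; real column names would come from your database metadata
df = pd.DataFrame({"ssn": ["123-45-6789"], "color": ["blue"], "birthDate": ["1990-01-01"]})
for col in sensitive_columns(df.columns):
    df[col] = "***MASKED***"  # hypothetical mask token
print(df)
```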
--- ## 📊 Model Performance Analysis **Table 1: Changing hyperparameters and results** | Step | Learning Rate | Batch Size | Epoch | Weight Decay | Precision | Recall | F1 Score | Accuracy | |--------|---------------|------------|-------|---------------|-----------|--------|----------|----------| | 1 | 0 | 16 | 1 | 0.001 | 0.0000 | 0.0000 | 0.0000 | 36.78% | | 2 | 1e-1 | 16 | 1 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 3 | 1e-1 | 32 | 1 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 4 | 1e-1 | 32 | 2 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 5 | 1e-1 | 32 | 3 | 0.001 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 6 | 1e-1 | 32 | 3 | 0.01 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 7 | 2e-1 | 32 | 4 | 0.01 | 0.6321 | 1.0000 | 0.7746 | 63.21% | | 8 | 3e-4 | 32 | 4 | 0.01 | 0.6331 | 0.9982 | 0.7748 | 63.32% | | 9 | 2e-4 | 32 | 4 | 0.01 | 0.9908 | 0.9730 | 0.9818 | 97.72% | | 10 | 1e-5 | 32 | 4 | 0.01 | 0.9964 | 0.9928 | 0.9946 | 99.31% | | 11 | 1e-5 | 32 | 5 | 0.01 | 0.9964 | 0.9928 | 0.9946 | 99.31% | | **12** | **1e-5** | **16** | **5** | **0.01** | **1.0000**| **0.9964** | **0.9982** | **99.72%** | | 13 | 1e-5 | 16 | 5 | 0.1 | 1.0000 | 0.9946 | 0.9973 | 99.65% | | 14 | 1e-5 | 32 | 5 | 0.1 | 1.0000 | 0.9946 | 0.9973 | 99.65% | | 15 | 1e-5 | 32 | 5 | 1.0 | 0.9964 | 0.9946 | 0.9946 | 99.54% | | 16 | 1e-6 | 32 | 5 | 1.0 | 0.8342 | 0.9153 | 0.8729 | 83.15% | ### Limitations One of the main limitations of this work is the use of a synthetic dataset instead of real-world data to fine-tune and train the AI models. Although the dataset was carefully checked for accuracy, it may not fully reflect the complexity and diversity of actual healthcare records. ## 👤 Author **MD Abdul Barek** PhD student & GRA @ Intelligent Systems and Robotics - 🏫 University of West Florida, Florida, USA - 📧 [email protected] - 📧 [email protected] - 🔗 [Hugging Face Profile](https://huggingface.co/barek2k2) **Advisor:** Dr. Hakki Erhan Sevil Associate Professor Intelligent Systems and Robotics, University of West Florida 📧 [email protected] **Supervisors:** Dr. Guillermo Francia III Director, Research and Innovation, Center for Cybersecurity, University of West Florida 📧 [email protected] Dr. Hossain Shahriar Associate Director and Professor, Center for Cybersecurity, University of West Florida 📧 [email protected] Dr. Sheikh Iqbal Ahamed Wehr Professor and Founding Chair of Computer Science Department at Marquette University, Marquette University 📧 [email protected]
ilkhom199/llama3_wikiqa_merged
ilkhom199
2025-05-21T15:30:01Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:30:01Z
--- license: apache-2.0 ---
ryangensyn/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-eager_ferocious_nightingale
ryangensyn
2025-05-21T15:29:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am eager ferocious nightingale", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-14T02:40:17Z
--- base_model: Gensyn/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-eager_ferocious_nightingale tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am eager ferocious nightingale - unsloth - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-eager_ferocious_nightingale This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ryangensyn/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-eager_ferocious_nightingale", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
w-A-T-C-H-Sophie-Rain-Spiderman-New-Video/Full.Clip.Sophie.Rain.Spider-Man.Video.Tutorial.Official
w-A-T-C-H-Sophie-Rain-Spiderman-New-Video
2025-05-21T15:26:51Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:26:03Z
18 seconds ago <a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a></p> <a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a></p> <p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p> Hot Shah Sapna Kumari MMS Original Video V𝐢ral Video L𝚎aᴋed on X Twitter Telegrama Shah Sapna Kumari Exclusive Latest Trending Now! L𝚎aᴋed Video Hot Shah Sapna Kumari MMS Original Video V𝐢ral Video L𝚎aᴋed on X Twitter Telegrama Shah Sapna Kumari viral video Full original LINK HD NOW Trending videos [-𝐅𝐔𝐋𝐋-𝐕𝐈𝐑𝐀𝐋-]— Actress Sah Sapna Kumari 2025 Viral Video Original Link On Social Media X Now ,Sapna Shah Viral Video Terabox Download Link , Sapna Kumari 16 Min Full Viral Video Original Link , Sapna Shah Viral Video Full 16 Min 6 Second Viral Video Full HD Quality [1U2M3N]
lukassso/movenet-myking
lukassso
2025-05-21T15:25:30Z
6
0
tf-keras
[ "tf-keras", "pose-detection", "region:us" ]
null
2025-05-19T11:38:00Z
--- tags: - pose-detection --- **[MoveNet](https://t.co/QpfnVL0YYI?amp=1)** is an ultra fast and accurate model that detects 17 keypoints of a body. The model is offered on [TF Hub](https://tfhub.dev/s?q=movenet) with two variants, known as Lightning and Thunder. Lightning is intended for latency-critical applications, while Thunder is intended for applications that require high accuracy. Both models run faster than real time (30+ FPS) on most modern desktops, laptops, and phones, which proves crucial for live fitness, health, and wellness applications. <img src="https://github.com/tensorflow/tfjs-models/raw/master/pose-detection/assets/combined_squat_dance.gif" alt="drawing"/> <img src="https://www.tensorflow.org/static/hub/tutorials/movenet_files/output_VHmTwACwFW-v_1.png" alt="ouline"/> *Images downloaded from Pexels (https://www.pexels.com/) This Colab walks you through the details of how to load MoveNet, and run inference on the input image and video below. Note: check out the [live demo](https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet) for how the model works!
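Since the card points to TF Hub rather than documenting this repo's own files, a minimal Python sketch for running the SinglePose Lightning variant from TF Hub is shown below; the input file name is a placeholder, and the Thunder variant would use a 256x256 input instead.

```python
# Minimal sketch: MoveNet SinglePose Lightning from TF Hub (not this repo's files).
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

image = tf.io.read_file("person.jpg")              # placeholder path to an RGB photo of one person
image = tf.image.decode_jpeg(image, channels=3)
image = tf.expand_dims(image, axis=0)
image = tf.image.resize_with_pad(image, 192, 192)  # Lightning expects 192x192; Thunder expects 256x256
image = tf.cast(image, dtype=tf.int32)

outputs = movenet(image)
keypoints = outputs["output_0"]  # shape [1, 1, 17, 3]: (y, x, score) for each of the 17 keypoints
print(keypoints.numpy()[0, 0])
```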
intelehealth/Qwen3-4B-GRPO
intelehealth
2025-05-21T15:23:53Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "endpoints_compatible", "region:us" ]
null
2025-05-21T09:36:16Z
--- base_model: Qwen/Qwen3-4B library_name: transformers model_name: Qwen3-4B-GRPO tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen3-4B-GRPO This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="intelehealth/Qwen3-4B-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/intelehealth-intelehealth/huggingface/runs/7t9xg1pq) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.1 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
DanielNRU/pollen-ner-1150
DanielNRU
2025-05-21T15:22:26Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T11:16:08Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-1150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-1150 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1465 - Precision: 0.8606 - Recall: 0.9177 - F1: 0.8882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 144 | 0.1529 | 0.8489 | 0.9137 | 0.8801 | | No log | 2.0 | 288 | 0.1490 | 0.8558 | 0.9177 | 0.8857 | | No log | 3.0 | 432 | 0.1499 | 0.8545 | 0.9197 | 0.8859 | | 0.2859 | 4.0 | 576 | 0.1465 | 0.8606 | 0.9177 | 0.8882 | | 0.2859 | 5.0 | 720 | 0.1566 | 0.8391 | 0.9217 | 0.8785 | | 0.2859 | 6.0 | 864 | 0.1514 | 0.8545 | 0.9197 | 0.8859 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
xw17/Qwen2.5-1.5B-Instruct_finetuned_4_optimized1_task_grouping_off_FT
xw17
2025-05-21T15:21:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T15:19:25Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Wicacy25/LLama7
Wicacy25
2025-05-21T15:19:49Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:19:40Z
--- license: apache-2.0 ---
nghodki/nl2api-meraki-1
nghodki
2025-05-21T15:19:39Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:distilbert/distilgpt2", "base_model:adapter:distilbert/distilgpt2", "region:us" ]
null
2025-05-21T15:19:20Z
--- base_model: distilgpt2 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
johngreendr1/8987e0de-687b-4850-a297-ea15d03705f8
johngreendr1
2025-05-21T15:18:09Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "region:us" ]
null
2025-05-21T13:54:27Z
--- base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
CyberTea/sbert_large_nlu_ru-onnx
CyberTea
2025-05-21T15:15:52Z
0
0
null
[ "onnx", "sentence-embeddings", "russian", "ru", "base_model:ai-forever/sbert_large_nlu_ru", "base_model:quantized:ai-forever/sbert_large_nlu_ru", "license:mit", "region:us" ]
null
2025-05-21T15:08:54Z
--- license: mit language: - ru base_model: - ai-forever/sbert_large_nlu_ru tags: - onnx - sentence-embeddings - russian --- # sbert_large_nlu_ru (ONNX) ## Description An ONNX-converted version of the [ai-forever/sbert_large_nlu_ru](https://huggingface.co/ai-forever/sbert_large_nlu_ru) model for producing sentence embeddings for Russian text. ## Metadata - **Language:** Russian - **Model format:** ONNX - **License:** MIT - **Authors:** SberDevices Team
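A minimal inference sketch with ONNX Runtime is given below; the ONNX file name, its exported input/output names, and the mean-pooling step are assumptions, since the card does not document them, so adjust to the actual export in this repo.

```python
# Hypothetical usage sketch: sentence embeddings from the ONNX export with onnxruntime.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/sbert_large_nlu_ru")
# "model.onnx" is an assumed file name; use the actual ONNX file from this repo
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

sentences = ["Привет, мир!", "Как дела?"]
enc = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="np")

# Feed only the inputs the export actually declares (it may omit token_type_ids)
input_names = {i.name for i in session.get_inputs()}
outputs = session.run(None, {k: v for k, v in enc.items() if k in input_names})
last_hidden_state = outputs[0]  # assumed to be (batch, seq_len, hidden)

# Mean pooling over non-padding tokens, as in the original sbert_large_nlu_ru recipe
mask = enc["attention_mask"][..., None].astype(np.float32)
embeddings = (last_hidden_state * mask).sum(axis=1) / np.clip(mask.sum(axis=1), 1e-9, None)
print(embeddings.shape)  # e.g. (2, 1024)
```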
lt-asset/nova-6.7b-bcr
lt-asset
2025-05-21T15:14:17Z
124
5
null
[ "safetensors", "llama", "arxiv:2311.13721", "license:bsd-3-clause-clear", "region:us" ]
null
2024-10-08T22:41:01Z
--- license: bsd-3-clause-clear --- # Nova: Generative Language Model For Assembly Code ## Abstract Binary code analysis is the foundation of crucial tasks in the security domain; thus building effective binary analysis techniques is more important than ever. Although large language models (LLMs) have brought impressive improvements to source code tasks, they do not directly generalize to assembly code due to the unique challenges of assembly: (1) the low information density of assembly and (2) the diverse optimizations in assembly code. To overcome these challenges, this work proposes a hierarchical attention mechanism that builds attention summaries to capture the semantics more effectively and designs contrastive learning objectives to train LLMs to learn assembly optimization. Equipped with these techniques, this work develops Nova, a generative LLM for assembly code. Nova outperforms existing techniques on binary code decompilation by up to 14.84 -- 21.58% higher Pass@1 and Pass@10, and outperforms the latest binary code similarity detection techniques by up to 6.17% Recall@1, showing promising abilities on both assembly generation and understanding tasks. ## Introduction of Nova Nova is pre-trained with the language modeling objective starting from DeepSeek-Coder checkpoints, using the disassembly code from [AnghaBench](https://github.com/albertan017/LLM4Decompile) and C/C++ programs compiled from [The-Stack](https://huggingface.co/datasets/bigcode/the-stack). This is the repository of the instruction-tuned model of Nova for binary code recovery, with 6.7B parameters. The other models in this series: - [Nova-1.3b](https://huggingface.co/lt-asset/nova-1.3b): Foundation model for binary code with 1.3B parameters. - [Nova-1.3b-bcr](https://huggingface.co/lt-asset/nova-1.3b-bcr): Nova-1.3b model further instruction-tuned for binary code recovery. - [Nova-6.7b](https://huggingface.co/lt-asset/nova-6.7b): Foundation model for binary code with 6.7B parameters.
## Usage

### Environment
```bash
conda create -n nova python=3.10
conda activate nova
pip install -r requirements.txt
```

Or use a docker image:
```bash
docker pull jiang719/nova
docker run --gpus all -it jiang719/nova
```

### Binary Code Recovery Generation
```python
import json

import torch
from transformers import AutoTokenizer
from modeling_nova import NovaTokenizer, NovaForCausalLM

tokenizer = AutoTokenizer.from_pretrained('lt-asset/nova-6.7b-bcr', trust_remote_code=True)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
    print('Vocabulary:', len(tokenizer.get_vocab()))    # 32280

tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
nova_tokenizer = NovaTokenizer(tokenizer)

model = NovaForCausalLM.from_pretrained('lt-asset/nova-6.7b-bcr', torch_dtype=torch.bfloat16).eval()

# load the humaneval-decompile dataset
data = json.load(open('humaneval_decompile_nova_6.7b.json', 'r'))

for item in data:
    print(item['task_id'], item['type'])

    prompt_before = f'# This is the assembly code with {item["type"]} optimization:\n<func0>:'
    asm = item['normalized_asm'].strip()
    assert asm.startswith('<func0>:')
    asm = asm[len('<func0>:'): ]
    prompt_after = '\nWhat is the source code?\n'

    inputs = prompt_before + asm + prompt_after
    # 0 for non-assembly code characters and 1 for assembly characters, required by the Nova tokenizer
    char_types = '0' * len(prompt_before) + '1' * len(asm) + '0' * len(prompt_after)

    tokenizer_output = nova_tokenizer.encode(inputs, '', char_types)
    input_ids = torch.LongTensor(tokenizer_output['input_ids'].tolist()).unsqueeze(0)
    nova_attention_mask = torch.LongTensor(tokenizer_output['nova_attention_mask']).unsqueeze(0)

    outputs = model.generate(
        inputs=input_ids.cuda(), max_new_tokens=512, temperature=0.2, top_p=0.95,
        num_return_sequences=20, do_sample=True,
        nova_attention_mask=nova_attention_mask.cuda(),
        no_mask_idx=torch.LongTensor([tokenizer_output['no_mask_idx']]).cuda(),
        pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id
    )

    item['infer_c_func'] = []
    for output in outputs:
        item['infer_c_func'].append({
            'c_func': tokenizer.decode(output[input_ids.size(1): ], skip_special_tokens=True, clean_up_tokenization_spaces=True)
        })

json.dump(data, open('humaneval_decompile_nova_6.7b.json', 'w'), indent=2)
```

### Test Case Execution
```python
import json
import os
import numpy as np
import subprocess
import math


def re_compile(func, tmp_file):
    # check whether the recovered function compiles on its own
    src = func.strip() + '\n'
    src += """
int main() {
    return 0;
}
"""
    os.chdir('/tmp/')
    with open(tmp_file, 'w') as wp:
        wp.write(src)
    try:
        subprocess.run(
            ["gcc", "-o", tmp_file.replace('.c', '.o'), tmp_file],
            check=True, stderr=subprocess.DEVNULL
        )
    except Exception as e:
        return False
    return True


def re_execute(func, test, tmp_file):
    # compile the recovered function together with its test cases and run them
    os.chdir('/tmp/')
    with open(tmp_file, 'w') as wp:
        wp.write(func.strip() + '\n\n')
        wp.write(test)
    if os.path.exists(tmp_file.replace('.c', '.o')):
        os.remove(tmp_file.replace('.c', '.o'))
    try:
        subprocess.run(
            ["gcc", "-o", tmp_file.replace('.c', '.o'), tmp_file],
            check=True, stderr=subprocess.DEVNULL
        )
        subprocess.run(
            [f"./{tmp_file.replace('.c', '.o')}"],
            check=True, stderr=subprocess.DEVNULL, timeout=5
        )
    except Exception as e:
        return False
    return True


def validate_decompilation(file, wd):
    data = json.load(open(file, 'r'))

    execute_result = {'O0': [], 'O1': [], 'O2': [], 'O3': []}
    compile_result = {'O0': [], 'O1': [], 'O2': [], 'O3': []}
    for i, item in enumerate(data):
        compile_correct, execute_correct = 0, 0
        for output in item['infer_c_func']:
            includes = [l for l in item['c_func'].splitlines() if l.startswith('#include')]
            includes = '\n'.join(includes)

            compile = re_compile(includes + '\n\n' + output['c_func'], 'temp.c')
            output['re-compile'] = compile
            if compile:
                compile_correct += 1

            execute = re_execute(includes + '\n\n' + output['c_func'], item['c_test'], 'temp.c')
            output['re-execute'] = execute
            if execute:
                execute_correct += 1

        compile_result[item['type']].append(compile_correct / len(item['infer_c_func']))
        execute_result[item['type']].append(execute_correct / len(item['infer_c_func']))
        print(item['task_id'], item['type'], compile_correct / len(item['infer_c_func']), execute_correct / len(item['infer_c_func']))

    os.chdir(wd)
    json.dump(data, open(file, 'w'), indent=2)


def calculate_passk(file, N=20, k=10):
    """
    N: the number of recoveries sampled for each task
    k: the value of k in Pass@k
    """
    def calculate_combinations(n, k):
        if n < k:
            return 0
        return math.factorial(n) / (math.factorial(k) * math.factorial(n - k))

    def passk(n, c, k):
        # Pass@k = 1 - C(n - c, k) / C(n, k), given n samples of which c are correct
        return 1 - calculate_combinations(n - c, k) / calculate_combinations(n, k)

    result = {
        'O0-execute': [], 'O1-execute': [], 'O2-execute': [], 'O3-execute': []
    }
    data = json.load(open(file, 'r'))
    for i, item in enumerate(data):
        compile = [output['re-compile'] for output in item['infer_c_func'][:N]]
        execute = [output['re-execute'] for output in item['infer_c_func'][:N]]

        compile_cnt = compile.count(True)
        compile = passk(N, compile_cnt, k)
        execute_cnt = execute.count(True)
        execute = passk(N, execute_cnt, k)

        result[f'{item["type"]}-execute'].append(execute)

    print('=======================================')
    for opt in result:
        print(f'Pass@{k}:', opt, np.mean(result[opt]))


if __name__ == '__main__':
    result_file = 'humaneval_decompile_nova_6.7b.json'
    wd = os.getcwd()
    validate_decompilation(result_file, wd)
    calculate_passk(result_file, N=20, k=10)
```

## Citation
```
@misc{jiang2024nova,
      title={Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning},
      author={Nan Jiang and Chengxiao Wang and Kevin Liu and Xiangzhe Xu and Lin Tan and Xiangyu Zhang},
      year={2024},
      eprint={2311.13721},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2311.13721},
}
```
morturr/Mistral-7B-v0.1-amazon-seed-7-2025-05-21
morturr
2025-05-21T15:13:09Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-05-21T15:12:53Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-amazon-seed-7-2025-05-21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-amazon-seed-7-2025-05-21 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 7 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
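### How to load the adapter (sketch)

The card does not include a usage snippet. The following is a minimal sketch for loading this LoRA adapter on top of the base model with PEFT; the example prompt and generation settings are illustrative and not part of the original training setup.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "morturr/Mistral-7B-v0.1-amazon-seed-7-2025-05-21"

# Load the base model, then attach the LoRA adapter weights from this repository.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; adapt to your own use case.
inputs = tokenizer("Write a short product review:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```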
nimra-mehra-videos/Link.Video.18.jobz.hunting.nimra.video.nimra.mehra.video.nimra.mehra
nimra-mehra-videos
2025-05-21T15:12:36Z
0
0
null
[ "region:us" ]
null
2025-05-21T15:10:39Z
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=nimra-mehra) [🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=nimra-mehra) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=nimra-mehra)
filipesantoscv11/6fb6fa49-1f41-47bc-ac06-07fa7bcfc12e
filipesantoscv11
2025-05-21T15:12:25Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/mistral-7b-instruct-v0.2", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:39:44Z
--- base_model: unsloth/mistral-7b-instruct-v0.2 library_name: transformers model_name: 6fb6fa49-1f41-47bc-ac06-07fa7bcfc12e tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 6fb6fa49-1f41-47bc-ac06-07fa7bcfc12e This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="filipesantoscv11/6fb6fa49-1f41-47bc-ac06-07fa7bcfc12e", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/is7njw1d) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Juanillomodillo/Mammo
Juanillomodillo
2025-05-21T15:12:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:12:11Z
--- license: apache-2.0 ---
issoh/Qwen3-1.7B-thinking-basic-sft
issoh
2025-05-21T15:10:39Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "endpoints_compatible", "region:us" ]
null
2025-05-21T05:31:46Z
--- library_name: transformers model_name: Qwen3-1.7B-thinking-basic-sft tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen3-1.7B-thinking-basic-sft This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="issoh/Qwen3-1.7B-thinking-basic-sft", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/issoh/Qwen3-1.7B-thinking-basic-sft/runs/5hhhyv98) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.53.0.dev0 - Pytorch: 2.2.2 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
HuggingPanda/docling-layout
HuggingPanda
2025-05-21T15:09:05Z
12
0
null
[ "safetensors", "rt_detr", "object-detection", "en", "base_model:ds4sd/docling-models", "base_model:finetune:ds4sd/docling-models", "region:us" ]
object-detection
2025-03-13T11:39:16Z
--- language: - en base_model: - ds4sd/docling-models pipeline_tag: object-detection --- # Docling Model for Layout This is the **Docling model for layout detection**, designed to facilitate easy importing and usage like any other Hugging Face model. This model is part of the [Docling repository](https://huggingface.co/ds4sd/docling-models), which provides document layout analysis tools. ## **Usage Example** Here's how you can load and use the model: ```python import torch from PIL import Image from transformers import RTDetrForObjectDetection, RTDetrImageProcessor # Load the model and processor image_processor = RTDetrImageProcessor.from_pretrained("HuggingPanda/docling-layout") model = RTDetrForObjectDetection.from_pretrained("HuggingPanda/docling-layout") # Load an image image = Image.open("hocr_output_page-0001.jpg") # Preprocess the image resize = {"height":640, "width":640} inputs = image_processor( images=image, return_tensors="pt", size=resize, ) # Perform inference with torch.no_grad(): outputs = model(**inputs) # Post-process results results = image_processor.post_process_object_detection( outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3 ) # Print detected objects for result in results: for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]): score, label = score.item(), label_id.item() box = [round(i, 2) for i in box.tolist()] print(f"{model.config.id2label[label+1]}: {score:.2f} {box}") ``` ## **Model Information** - **Base Model:** RT-DETR (Robust Transformer-based Object Detector) - **Intended Use:** Layout detection for documents - **Framework:** [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) - **Dataset Used:** Internal dataset for document structure recognition - **License:** Apache 2.0 ## **Citing This Model** If you use this model in your work, please cite the main **Docling repository**: ``` @misc{docling2024, title={Docling Models for Document Layout Analysis}, author={DS4SD Team}, year={2024}, howpublished={Hugging Face Repository}, url={https://huggingface.co/ds4sd/docling-models} } ``` For more details, visit the main repo: [ds4sd/docling-models](https://huggingface.co/ds4sd/docling-models). ## **Contact** For questions or issues, please open a discussion on Hugging Face or contact [[email protected]].
DanielNRU/pollen-ner-1050
DanielNRU
2025-05-21T15:09:02Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T11:00:50Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-1050 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-1050 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1594 - Precision: 0.8285 - Recall: 0.9116 - F1: 0.8681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 132 | 0.1594 | 0.8285 | 0.9116 | 0.8681 | | No log | 2.0 | 264 | 0.1633 | 0.8270 | 0.9116 | 0.8672 | | No log | 3.0 | 396 | 0.1606 | 0.8266 | 0.9096 | 0.8662 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
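### How to load the adapter (sketch)

A minimal sketch for loading this adapter for token classification with PEFT; it assumes the classification head was saved alongside the adapter (e.g. via `modules_to_save`), which is not stated in the card, and the example sentence is illustrative only.

```python
from peft import AutoPeftModelForTokenClassification
from transformers import AutoTokenizer

adapter_id = "DanielNRU/pollen-ner-1050"

# Loads DeepPavlov/rubert-base-cased and applies the adapter on top of it.
model = AutoPeftModelForTokenClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")

text = "Pollen of birch was recorded in Moscow."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
# Map to label names via model.config.id2label if it is populated in the saved config.
print(pred_ids)
```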
NurErtug/MNLP_M2_mcqa_model
NurErtug
2025-05-21T15:08:56Z
0
1
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-17T13:08:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yukinoo/NexusGS-llff
Yukinoo
2025-05-21T15:08:36Z
3
0
NexusGS
[ "NexusGS", "model_hub_mixin", "pytorch_model_hub_mixin", "arxiv:2503.18794", "region:us" ]
null
2025-05-16T12:33:25Z
--- library_name: NexusGS tags: - model_hub_mixin - pytorch_model_hub_mixin --- # NexusGS This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: https://github.com/USMizuki/NexusGS - Paper: https://arxiv.org/abs/2503.18794 - Docs: https://usmizuki.github.io/NexusGS/ This repository hosts multiple Gaussian Splatting models trained on the LLFF dataset using the [NexusGS](https://github.com/USMizuki/NexusGS) framework. Each scene is stored as a separate **Git branch**, and can be loaded individually via the `HFGaussianModel` class. ## 💡 Overview This model repository contains pre-trained Gaussian Splatting models for 8 LLFF scenes: - `fern` - `flower` - `fortress` - `horns` - `leaves` - `orchids` - `room` - `trex` Each branch contains: ``` ├── config.json ├── model.safetensors ``` The `main` branch contains all model files for convenience, named as: ``` config-<scene>.json model-<scene>.safetensors ``` ## 📦 Usage To use a specific scene model, use the `from_pretrained` method from [`scene.hf_gaussian_model.HFGaussianModel`](https://github.com/USMizuki/NexusGS/blob/main/scene/hf_gaussian_model.py): ```python from scene.hf_gaussian_model import HFGaussianModel # Load the 'fern' scene model = HFGaussianModel.from_pretrained("Yukinoo/NexusGS-llff", revision="fern") ``` Replace `"fern"` with any of the available scenes: `"flower"`, `"fortress"`, `"horns"`, `"leaves"`, `"orchids"`, `"room"`, `"trex"` ## 🔧 Source Code The `HFGaussianModel` loader is implemented in the [NexusGS repository](https://github.com/USMizuki/NexusGS/blob/main/scene/hf_gaussian_model.py). This utility enables seamless integration with Hugging Face Hub. ```python from scene.hf_gaussian_model import HFGaussianModel model = HFGaussianModel.from_pretrained("Yukinoo/NexusGS-llff", revision="scene_name") ``` ## 📜 License This model follows the license of the [NexusGS repository](https://github.com/USMizuki/NexusGS). Please check the original repo for details. --- For issues, contributions, or suggestions, feel free to open a discussion or pull request.
ngxson/Devstral-Small-Vision-2505-GGUF
ngxson
2025-05-21T15:06:54Z
0
3
null
[ "gguf", "image-text-to-text", "base_model:mistralai/Devstral-Small-2505", "base_model:quantized:mistralai/Devstral-Small-2505", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-05-21T14:40:54Z
--- license: apache-2.0 base_model: - mistralai/Devstral-Small-2505 pipeline_tag: image-text-to-text --- # Devstral Small with VISION support The vision encoder is taken from Mistral Small, works out-of-the-box with llama.cpp ```sh llama-server -hf ngxson/Devstral-Small-Vision-2505-GGUF ``` ## Demo <img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/63ca214abedad7e2bf1d1517/0MqkN8cyZSwhnCFMepzNV.png" /> <img width=800 src="https://cdn-uploads.huggingface.co/production/uploads/63ca214abedad7e2bf1d1517/HDic53ANsCoJbiWu2eE6K.png" /> Output: <img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/63ca214abedad7e2bf1d1517/onV1xfJIT8gzh81RkLn8J.png" />
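## Example request (sketch)

Once `llama-server` is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint. The snippet below is a minimal sketch, assuming the default port 8080 and a recent llama.cpp build with multimodal support; the file name and prompt are illustrative.

```python
import base64
import requests

# Encode a local image as a data URI for the OpenAI-compatible multimodal chat API.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this screenshot and suggest a fix for the error it shows."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    "max_tokens": 256,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```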
Sucube131/lora-tyty_01
Sucube131
2025-05-21T15:04:15Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-21T14:18:58Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Tyty --- # Lora Tyty_01 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Tyty` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Tyty", "lora_weights": "https://huggingface.co/Sucube131/lora-tyty_01/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Sucube131/lora-tyty_01', weight_name='lora.safetensors') image = pipeline('Tyty').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Sucube131/lora-tyty_01/discussions) to add images that show off what you’ve made with this LoRA.
TestAPiCalls/apicalls
TestAPiCalls
2025-05-21T15:00:47Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-21T15:00:47Z
--- license: apache-2.0 ---
zaddyzaddy/Qwen-Bypass-Done
zaddyzaddy
2025-05-21T15:00:14Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T21:28:51Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: Qwen-2.5-7B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen-2.5-7B-base-RAG-RL This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 2.19.0 - Tokenizers 0.20.3
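### How to use (sketch)

The card does not include a usage example. Below is a minimal sketch using the 🤗 Transformers text-generation pipeline, mirroring the quick-start pattern of other cards in this collection; the prompt and generation settings are illustrative and not from the author.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="zaddyzaddy/Qwen-Bypass-Done", device_map="auto")

messages = [{"role": "user", "content": "Summarize what retrieval-augmented generation is in two sentences."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```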
kenchenxingyu/sealion-8B-lora-stance-sgmy_ACCOP_APATAP2025
kenchenxingyu
2025-05-21T15:00:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-21T14:59:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
reach-vb/Devstral-Small-2505
reach-vb
2025-05-21T14:58:57Z
0
0
vllm
[ "vllm", "safetensors", "mistral", "text-generation", "transformers", "text2text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "license:apache-2.0", "text-generation-inference", "region:us" ]
text2text-generation
2025-05-21T14:58:31Z
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Devstral-Small-2505
extra_gated_description: >-
  If you want to learn more about how we process your personal data, please read
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text2text-generation
tags:
- transformers
---

# Devstral-Small-2505

Devstral is an agentic LLM for software engineering tasks built under a collaboration between [Mistral AI](https://mistral.ai/) and [All Hands AI](https://www.all-hands.dev/) 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this [benchmark](#benchmark-results).

It is finetuned from [Mistral-Small-3.1](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503), therefore it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only; the vision encoder was removed from `Mistral-Small-3.1` before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our [blog post](https://mistral.ai/news/devstral).

## Key Features:
- **Agentic coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
- **Lightweight**: With its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
- **Apache 2.0 License**: Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window**: A 128k context window.
- **Tokenizer**: Utilizes a Tekken tokenizer with a 131k vocabulary size.

## Benchmark Results

### SWE-Bench

Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.

| Model            | Scaffold           | SWE-Bench Verified (%) |
|------------------|--------------------|------------------------|
| Devstral         | OpenHands Scaffold | **46.8**               |
| GPT-4.1-mini     | OpenAI Scaffold    | 23.6                   |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6                   |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2                   |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI 🙌), Devstral exceeds far larger models such as Deepseek-V3-0324 and Qwen3 232B-A22B.

![SWE Benchmark](assets/swe_bench.png)

## Usage

We recommend using Devstral with the [OpenHands](https://github.com/All-Hands-AI/OpenHands/tree/main) scaffold. You can use it either through our API or by running it locally.

### API

Follow these [instructions](https://docs.mistral.ai/getting-started/quickstart/#account-setup) to create a Mistral account and get an API key.

Then run these commands to start the OpenHands docker container.
```bash
export MISTRAL_API_KEY=<MY_KEY>

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik

mkdir -p ~/.openhands-state && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2505","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands-state/settings.json

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.39
```

### Local inference

The model can also be deployed with the following libraries:
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
- [`mistral-inference`](https://github.com/mistralai/mistral-inference): See [here](#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
- [`LMStudio`](https://lmstudio.ai/): See [here](#lmstudio)
- [`ollama`](https://github.com/ollama/ollama): See [here](#ollama)

### OpenHands (recommended)

#### Launch a server to deploy Devstral-Small-2505

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with `Devstral-Small-2505`.

For this tutorial, we spun up a vLLM server with the command:
```bash
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

The server address should be in the following format: `http://<your-server-url>:8000/v1`

#### Launch OpenHands

You can follow the installation instructions for OpenHands [here](https://docs.all-hands.dev/modules/usage/installation).

The easiest way to launch OpenHands is to use the Docker image:
```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Then, you can access the OpenHands UI at `http://localhost:3000`.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill in the following fields:
- **Custom Model**: `openai/mistralai/Devstral-Small-2505`
- **Base URL**: `http://<your-server-url>:8000/v1`
- **API Key**: `token` (or any other token you used to launch the server, if any)

#### Use OpenHands powered by Devstral

Now you're good to use Devstral Small inside OpenHands by **starting a new conversation**. Let's build a To-Do list app.

<details>
<summary>To-Do list app</summary>

1. Let's ask Devstral to generate the app with the following prompt:

```txt
Build a To-Do list app with the following requirements:
- Built using FastAPI and React.
- Make it a one page app that:
   - Allows to add a task.
   - Allows to delete a task.
   - Allows to mark a task as done.
   - Displays the list of tasks.
- Store the tasks in a SQLite database.
```

![Agent prompting](assets/tuto_open_hands/agent_prompting.png)

2. Let's see the result

You should see the agent construct the app and be able to explore the code it generated. If it doesn't do it automatically, ask Devstral to deploy the app or do it manually, and then go to the deployed front-end URL to see the app.

![Agent working](assets/tuto_open_hands/agent_working.png)

![App UI](assets/tuto_open_hands/app_ui.png)

3. Iterate

Now that you have a first result, you can iterate on it by asking your agent to improve it. For example, in the generated app we could click on a task to mark it as done, but a checkbox would improve the UX. You could also ask it to add a feature to edit a task, or to filter the tasks by status.

Enjoy building with Devstral Small and OpenHands!
</details>

### vLLM (recommended)

We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines.

**_Installation_**

Make sure you install [`vLLM >= 0.8.5`](https://github.com/vllm-project/vllm/releases/tag/v0.8.5):

```
pip install vllm --upgrade
```

Doing so should automatically install [`mistral_common >= 1.5.5`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.5).

To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```

You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).

#### Server

We recommend that you use Devstral in a server/client setting.

1. Spin up a server:

```
vllm serve mistralai/Devstral-Small-2505 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```

2. To ping the server, you can use a simple Python snippet.

```py
import requests
import json
from huggingface_hub import hf_hub_download


url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2505"

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "<your-command>",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
```

### Mistral-inference

We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.

#### Install

Make sure to have mistral_inference >= 1.6.0 installed.
```bash
pip install mistral_inference --upgrade
```

#### Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2505", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Python

You can run the model using the following command:

```bash
mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300
```

You can then prompt it with anything you'd like.

### Transformers

To make the best use of our model with transformers, make sure to have [installed](https://github.com/mistralai/mistral-common) `mistral-common >= 1.5.5` to use our tokenizer.

```bash
pip install mistral-common --upgrade
```

Then load our tokenizer along with the model and generate:

```python
import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM


def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt


model_id = "mistralai/Devstral-Small-2505"
tekken_file = hf_hub_download(repo_id=model_id, filename="tekken.json")
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

tokenizer = MistralTokenizer.from_file(tekken_file)

model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)
```

### LMStudio

Download the weights from huggingface:

```
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
  "mistralai/Devstral-Small-2505_gguf" \
  --include "devstralQ4_K_M.gguf" \
  --local-dir "mistralai/Devstral-Small-2505_gguf/"
```

You can serve the model locally with [LMStudio](https://lmstudio.ai/).
* Download [LM Studio](https://lmstudio.ai/) and install it
* Install the `lms` CLI: `~/.lmstudio/bin/lms bootstrap`
* In a bash terminal, run `lms import devstralQ4_K_M.gguf` in the directory where you've downloaded the model checkpoint (e.g. `mistralai/Devstral-Small-2505_gguf`)
* Open the LM Studio application, click the terminal icon to get into the developer tab, then click "Select a model to load" and select Devstral Q4 K M. Toggle the status button to start the model, and in settings toggle "Serve on Local Network" on.
* On the right tab, you will see an API identifier, which should be devstralq4_k_m, and an API address under API Usage. Keep note of this address; we will use it in the next step.

#### Launch OpenHands

You can now interact with the model served from LM Studio with OpenHands.
Start the OpenHands server with Docker:

```bash
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.38-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.38
```

Click "see advanced settings" on the second line. In the new tab, toggle Advanced on. Set the custom model to `mistral/devstralq4_k_m` and the Base URL to the API address we got from the last step in LM Studio. Set the API Key to `dummy`. Click "Save changes".

### Ollama

You can run Devstral using the [Ollama](https://ollama.ai/) CLI.

```bash
ollama run devstral
```
scanton/MNLP_M2_rag_model
scanton
2025-05-21T14:58:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T13:45:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
scanton/MNLP_M2_document_encoder
scanton
2025-05-21T14:56:45Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-05-21T13:01:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arshiaafshani/arshGpt2
arshiaafshani
2025-05-21T14:55:42Z
0
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dataset:allenai/dolmino-mix-1124", "base_model:arshiaafshani/arshGpt", "base_model:finetune:arshiaafshani/arshGpt", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T13:38:43Z
--- library_name: transformers license: mit base_model: arshiaafshani/arshGpt model-index: - name: arshGpt2 results: [] datasets: - allenai/dolmino-mix-1124 pipeline_tag: text-generation ---
FormlessAI/40930eac-e0d5-4ea8-9c96-8bc15686d687
FormlessAI
2025-05-21T14:55:36Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "unsloth", "base_model:unsloth/Qwen2-1.5B", "base_model:finetune:unsloth/Qwen2-1.5B", "endpoints_compatible", "region:us" ]
null
2025-05-21T14:05:28Z
--- base_model: unsloth/Qwen2-1.5B library_name: transformers model_name: 40930eac-e0d5-4ea8-9c96-8bc15686d687 tags: - generated_from_trainer - trl - sft - unsloth licence: license --- # Model Card for 40930eac-e0d5-4ea8-9c96-8bc15686d687 This model is a fine-tuned version of [unsloth/Qwen2-1.5B](https://huggingface.co/unsloth/Qwen2-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/40930eac-e0d5-4ea8-9c96-8bc15686d687", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/asp1mmsi) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.7.0+cu118 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sergioalves/ca84a72d-308d-4468-834e-aac0211984eb
sergioalves
2025-05-21T14:54:39Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "base_model:quantized:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:33:24Z
--- base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060 library_name: transformers model_name: ca84a72d-308d-4468-834e-aac0211984eb tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for ca84a72d-308d-4468-834e-aac0211984eb This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/ca84a72d-308d-4468-834e-aac0211984eb", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/mge2m4w8) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Bisher/train_run-gemma-3-4b-it-arabic-diacritization_10k
Bisher
2025-05-21T14:53:05Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/gemma-3-4b-it", "base_model:adapter:google/gemma-3-4b-it", "license:gemma", "region:us" ]
null
2025-05-21T08:10:00Z
--- library_name: peft license: gemma base_model: google/gemma-3-4b-it tags: - generated_from_trainer model-index: - name: train_run-gemma-3-4b-it-arabic-diacritization_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_run-gemma-3-4b-it-arabic-diacritization_10k This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4496 | 0.32 | 200 | 0.1216 | | 0.4045 | 0.64 | 400 | 0.0979 | | 0.3557 | 0.96 | 600 | 0.0929 | ### Framework versions - PEFT 0.14.0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
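The card above reports hyperparameters and eval loss but no inference code. A minimal, hedged loading sketch for the adapter might look like the following; the model class, the Arabic prompt wording, and the generation settings are assumptions, since the card does not document the instruction format used for fine-tuning.

```python
# Hedged sketch: attach the diacritization adapter to google/gemma-3-4b-it.
# The prompt below is a placeholder; the training prompt format is undocumented.
import torch
from transformers import AutoTokenizer, Gemma3ForConditionalGeneration
from peft import PeftModel

base_id = "google/gemma-3-4b-it"
adapter_id = "Bisher/train_run-gemma-3-4b-it-arabic-diacritization_10k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = Gemma3ForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "شكّل النص التالي: ذهب الولد الى المدرسة"  # "Diacritize the following text" (placeholder)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```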
xw17/Qwen2.5-1.5B-Instruct_finetuned_1_optimized1_task_grouping_off_FT
xw17
2025-05-21T14:52:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-21T14:50:33Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DanielNRU/pollen-ner-850
DanielNRU
2025-05-21T14:50:00Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-20T10:35:19Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-850 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-850 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1728 - Precision: 0.8190 - Recall: 0.8996 - F1: 0.8574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 107 | 0.1888 | 0.7972 | 0.8996 | 0.8453 | | No log | 2.0 | 214 | 0.1746 | 0.8135 | 0.8936 | 0.8517 | | No log | 3.0 | 321 | 0.1690 | 0.8250 | 0.8896 | 0.8560 | | No log | 4.0 | 428 | 0.1728 | 0.8190 | 0.8996 | 0.8574 | | 0.3522 | 5.0 | 535 | 0.1822 | 0.8050 | 0.9036 | 0.8515 | | 0.3522 | 6.0 | 642 | 0.1829 | 0.7961 | 0.9016 | 0.8456 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
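Since the card reports precision/recall/F1 but no usage snippet, here is a rough loading sketch. It assumes a token-classification head; the entity label set is not documented, so `num_labels` below is a placeholder that must match the adapter's saved classifier.

```python
# Hedged sketch: pollen-ner-850 PEFT adapter on top of DeepPavlov/rubert-base-cased.
# num_labels is a placeholder; check the adapter config for the real label set.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

base_id = "DeepPavlov/rubert-base-cased"
adapter_id = "DanielNRU/pollen-ner-850"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=3)  # placeholder
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

text = "Пыльца берёзы зафиксирована в пробе из Москвы."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # per-token label ids
```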
litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8
litmudoc
2025-05-21T14:47:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multimodal", "abliterated", "uncensored", "mlx", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-05-21T14:35:01Z
--- license: apache-2.0 language: - en pipeline_tag: image-text-to-text tags: - multimodal - abliterated - uncensored - mlx library_name: transformers base_model: - Qwen/Qwen2.5-VL-32B-Instruct --- # litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8 This model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated) using mlx-vlm version **0.1.26**. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model litmudoc/Qwen2.5-VL-32B-Instruct-abliterated-MLX-Q8 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image> ```
rl-bandits-lab/hhlrhf_rm
rl-bandits-lab
2025-05-21T14:46:28Z
0
0
null
[ "safetensors", "llama", "en", "dataset:Dahoas/full-hh-rlhf", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:mit", "region:us" ]
null
2025-05-21T09:53:21Z
--- license: mit datasets: - Dahoas/full-hh-rlhf language: - en base_model: - meta-llama/Llama-3.1-8B-Instruct ---
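The card above is metadata only. If the checkpoint follows the common sequence-classification layout for RLHF reward models (an assumption the card does not confirm), scoring a dialogue might look roughly like this:

```python
# Hedged sketch: score a Helpful/Harmless-style exchange with the reward model.
# Assumes a single-logit sequence-classification head; verify against the repo config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "rl-bandits-lab/hhlrhf_rm"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

dialogue = (
    "Human: How do I store fresh basil?\n\n"
    "Assistant: Trim the stems and keep the bunch in a glass of water at room temperature."
)
inputs = tokenizer(dialogue, return_tensors="pt").to(model.device)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"reward: {score:.3f}")
```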
vmpsergio/7929f35b-b496-4f2f-811b-971589b4c37e
vmpsergio
2025-05-21T14:46:20Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-7B", "base_model:quantized:unsloth/Qwen2-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-21T14:01:50Z
--- base_model: unsloth/Qwen2-7B library_name: transformers model_name: 7929f35b-b496-4f2f-811b-971589b4c37e tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for 7929f35b-b496-4f2f-811b-971589b4c37e This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vmpsergio/7929f35b-b496-4f2f-811b-971589b4c37e", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/vd0ge83h) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
KristjanRRR/HiDream-I1-Full-ink-drawing-lora-3
KristjanRRR
2025-05-21T14:45:45Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "hidream", "hidream-diffusers", "template:sd-lora", "base_model:HiDream-ai/HiDream-I1-Full", "base_model:adapter:HiDream-ai/HiDream-I1-Full", "license:mit", "region:us" ]
text-to-image
2025-05-20T13:47:06Z
--- base_model: HiDream-ai/HiDream-I1-Full library_name: diffusers license: mit instance_prompt: TOK8 widget: - text: a woman riding an orca while waving hello and working on her laptop in the style of TOK8 output: url: image_0.png - text: a woman riding an orca while waving hello and working on her laptop in the style of TOK8 output: url: image_1.png - text: a woman riding an orca while waving hello and working on her laptop in the style of TOK8 output: url: image_2.png - text: a woman riding an orca while waving hello and working on her laptop in the style of TOK8 output: url: image_3.png tags: - text-to-image - diffusers-training - diffusers - lora - hidream - hidream-diffusers - template:sd-lora - text-to-image - diffusers-training - diffusers - lora - hidream - hidream-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # HiDream Image DreamBooth LoRA - KristjanRRR/HiDream-I1-Full-ink-drawing-lora-3 <Gallery /> ## Model description These are KristjanRRR/HiDream-I1-Full-ink-drawing-lora-3 DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md). ## Trigger words You should use `TOK8` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](KristjanRRR/HiDream-I1-Full-ink-drawing-lora-3/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py >>> import torch >>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM >>> from diffusers import HiDreamImagePipeline >>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct") >>> text_encoder_4 = LlamaForCausalLM.from_pretrained( ... "meta-llama/Meta-Llama-3.1-8B-Instruct", ... output_hidden_states=True, ... output_attentions=True, ... torch_dtype=torch.bfloat16, ... ) >>> pipe = HiDreamImagePipeline.from_pretrained( ... "HiDream-ai/HiDream-I1-Full", ... tokenizer_4=tokenizer_4, ... text_encoder_4=text_encoder_4, ... torch_dtype=torch.bfloat16, ... ) >>> pipe.enable_model_cpu_offload() >>> pipe.load_lora_weights(f"KristjanRRR/HiDream-I1-Full-ink-drawing-lora-3") >>> image = pipe(f"TOK8").images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
ktrin-u/clearflow-drain-classifier
ktrin-u
2025-05-21T14:45:26Z
10
0
keras
[ "keras", "en", "license:apache-2.0", "region:us" ]
null
2025-05-19T09:27:43Z
--- license: apache-2.0 language: - en metrics: - accuracy - f1 library_name: keras ---
VauntedVessel/Terrain_Recognition_Using_CNN
VauntedVessel
2025-05-21T14:44:51Z
0
0
null
[ "image-classification", "license:mit", "region:us" ]
image-classification
2025-05-21T06:23:11Z
--- license: mit pipeline_tag: image-classification ---
Sowmyagnaneshwar/english-telugu-colloquial-translator
Sowmyagnaneshwar
2025-05-21T14:44:49Z
0
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:adapter:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "region:us" ]
null
2025-04-02T14:54:07Z
--- library_name: peft license: apache-2.0 base_model: unsloth/tinyllama-chat-bnb-4bit tags: - unsloth - generated_from_trainer model-index: - name: english-telugu-colloquial-translator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # english-telugu-colloquial-translator This model is a fine-tuned version of [unsloth/tinyllama-chat-bnb-4bit](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit) on the None dataset. It achieves the following results on the evaluation set: - Loss: 9.8203 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 13.7522 | 2.0 | 2 | 9.8203 | | 13.7522 | 4.0 | 4 | 9.8203 | | 13.7522 | 6.0 | 6 | 9.8203 | | 13.7522 | 8.0 | 8 | 9.8203 | | 13.7522 | 10.0 | 10 | 9.8203 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
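No usage code accompanies the card, so the sketch below shows one plausible way to run the adapter; the prompt format is an assumption, and the flat validation loss in the table above suggests outputs should be checked before relying on them.

```python
# Hedged sketch: load the 4-bit TinyLlama chat base (needs bitsandbytes + CUDA)
# and attach the English-to-Telugu adapter. The prompt wording is a placeholder.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "unsloth/tinyllama-chat-bnb-4bit"
adapter_id = "Sowmyagnaneshwar/english-telugu-colloquial-translator"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Translate to colloquial Telugu: Where are you going?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```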
DanielNRU/pollen-ner-800
DanielNRU
2025-05-21T14:43:55Z
3
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:DeepPavlov/rubert-base-cased", "base_model:adapter:DeepPavlov/rubert-base-cased", "region:us" ]
null
2025-05-19T14:14:24Z
--- library_name: peft base_model: DeepPavlov/rubert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: pollen-ner-800 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pollen-ner-800 This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1787 - Precision: 0.8080 - Recall: 0.8956 - F1: 0.8495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 100 | 0.1867 | 0.7903 | 0.8855 | 0.8352 | | No log | 2.0 | 200 | 0.1838 | 0.7996 | 0.8896 | 0.8422 | | No log | 3.0 | 300 | 0.1801 | 0.8073 | 0.8916 | 0.8473 | | No log | 4.0 | 400 | 0.1787 | 0.8080 | 0.8956 | 0.8495 | | 0.3702 | 5.0 | 500 | 0.1829 | 0.8004 | 0.8936 | 0.8444 | | 0.3702 | 6.0 | 600 | 0.1824 | 0.8018 | 0.8936 | 0.8452 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.7.0+cu128 - Datasets 3.5.0 - Tokenizers 0.21.1
VauntedVessel/Pneumonia_Detector_CNN
VauntedVessel
2025-05-21T14:43:22Z
4
1
keras
[ "keras", "image-classification", "license:mit", "region:us" ]
image-classification
2025-05-21T08:20:54Z
--- license: mit pipeline_tag: image-classification ---
Armandotrsg/qwen-cybersecurity-2.5-7b-gguf
Armandotrsg
2025-05-21T14:43:00Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-21T14:39:18Z
--- base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Armandotrsg - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-7b-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
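The card does not show how to run the GGUF export; one hedged option is llama-cpp-python, sketched below. The exact .gguf filename (and quantisation) in the repository is not stated, so the pattern is a placeholder.

```python
# Hedged sketch: chat with the GGUF export via llama-cpp-python.
# The filename pattern is a placeholder; pick the concrete quant file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Armandotrsg/qwen-cybersecurity-2.5-7b-gguf",
    filename="*.gguf",   # placeholder pattern
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three common web application vulnerabilities."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```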
unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF
unsloth
2025-05-21T14:40:22Z
36,516
63
vllm
[ "vllm", "gguf", "mistral3", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "license:apache-2.0", "region:us", "conversational" ]
null
2025-03-18T20:30:50Z
--- language: - en - fr - de - es - pt - it - ja - ko - ru - zh - ar - fa - id - ms - ne - pl - ro - sr - sv - tr - uk - vi - hi - bn license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-3.1-24B-Instruct-2503 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- > [!NOTE] > Now with Vision support added! > <div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/mistral-small-3-all-versions-679fe9a4722f40d61cfe627c">our collection</a> for all versions of Mistral 3.1 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Learn to run Fine-tune correctly - <a href="https://docs.unsloth.ai/get-started/fine-tuning-guide">Read our Guide</a>.</em> </p> <p style="margin-top: 0;margin-bottom: 0;"> <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">✨ Run & Fine-tune Mistral with Unsloth!</h1> </div> - Fine-tune Mistral v0.3 (7B)) for free using our Google [Colab notebook here](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb)! - Read our Blog about Devstral support: [docs.unsloth.ai/basics/devstral](https://docs.unsloth.ai/basics/devstral) - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks). - # Model Card for Mistral-Small-3.1-24B-Instruct-2503 Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) **adds state-of-the-art vision understanding** and enhances **long context capabilities up to 128k tokens** without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks. This model is an instruction-finetuned version of: [Mistral-Small-3.1-24B-Base-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized. It is ideal for: - Fast-response conversational agents. - Low-latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. - Programming and math reasoning. - Long document understanding. - Visual understanding. For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community. Learn more about Mistral Small 3.1 in our [blog post](https://mistral.ai/news/mistral-small-3-1/). 
## Key Features - **Vision:** Vision capabilities enable the model to analyze images and provide insights based on visual content in addition to text. - **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 128k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark Results When available, we report numbers previously published by other model providers, otherwise we re-evaluate them using our own evaluation harness. ### Pretrain Evals | Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT)| MMMU | |--------------------------------|---------------|-----------------------|------------|-----------------------|-----------| | **Small 3.1 24B Base** | **81.01%** | **56.03%** | 80.50% | **37.50%** | **59.27%**| | Gemma 3 27B PT | 78.60% | 52.20% | **81.30%** | 24.30% | 56.10% | ### Instruction Evals #### Text | Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT )| MBPP | HumanEval | SimpleQA (TotalAcc)| |--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|-----------|-----------|--------------------| | **Small 3.1 24B Instruct** | 80.62% | 66.76% | 69.30% | **44.42%** | **45.96%** | 74.71% | **88.41%**| **10.43%** | | Gemma 3 27B IT | 76.90% | **67.50%** | **89.00%** | 36.83% | 42.40% | 74.40% | 87.80% | 10.00% | | GPT4o Mini | **82.00%**| 61.70% | 70.20% | 40.20% | 39.39% | 84.82% | 87.20% | 9.50% | | Claude 3.5 Haiku | 77.60% | 65.00% | 69.20% | 37.05% | 41.60% | **85.60%**| 88.10% | 8.02% | | Cohere Aya-Vision 32B | 72.14% | 47.16% | 41.98% | 34.38% | 33.84% | 70.43% | 62.20% | 7.65% | #### Vision | Model | MMMU | MMMU PRO | Mathvista | ChartQA | DocVQA | AI2D | MM MT Bench | |--------------------------------|------------|-----------|-----------|-----------|-----------|-------------|-------------| | **Small 3.1 24B Instruct** | 64.00% | **49.25%**| **68.91%**| 86.24% | **94.08%**| **93.72%** | **7.3** | | Gemma 3 27B IT | **64.90%** | 48.38% | 67.60% | 76.00% | 86.60% | 84.50% | 7 | | GPT4o Mini | 59.40% | 37.60% | 56.70% | 76.80% | 86.70% | 88.10% | 6.6 | | Claude 3.5 Haiku | 60.50% | 45.03% | 61.60% | **87.20%**| 90.00% | 92.10% | 6.5 | | Cohere Aya-Vision 32B | 48.20% | 31.50% | 50.10% | 63.04% | 72.40% | 82.57% | 4.1 | ### Multilingual Evals | Model | Average | European | East Asian | Middle Eastern | |--------------------------------|------------|------------|------------|----------------| | **Small 3.1 24B Instruct** | **71.18%** | **75.30%** | **69.17%** | 69.08% | | Gemma 3 27B IT | 70.19% | 74.14% | 65.65% | 70.76% | | GPT4o Mini | 70.36% | 74.21% | 65.96% | **70.90%** | | Claude 3.5 Haiku | 70.16% | 73.45% | 67.05% | 70.00% | | Cohere Aya-Vision 32B | 62.15% | 64.70% | 57.61% | 64.12% | ### Long Context Evals | Model | LongBench v2 | 
RULER 32K | RULER 128K | |--------------------------------|-----------------|-------------|------------| | **Small 3.1 24B Instruct** | **37.18%** | **93.96%** | 81.20% | | Gemma 3 27B IT | 34.59% | 91.10% | 66.00% | | GPT4o Mini | 29.30% | 90.20% | 65.8% | | Claude 3.5 Haiku | 35.19% | 92.60% | **91.90%** | ## Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks; - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm) **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailer it for your needs. If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. You power an AI assistant called Le Chat. Your knowledge base was last updated on 2023-10-01. The current date is {today}. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?"). You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date. You follow these instructions in all languages, and always respond to the user in the language they use or request. Next sections describe the capabilities that you have. # WEB BROWSING INSTRUCTIONS You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat. # MULTI-MODAL INSTRUCTIONS You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos. You cannot read nor transcribe audio files or videos.""" ``` ### vLLM (recommended) We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **_Installation_** Make sure you install [`vLLM nightly`](https://github.com/vllm-project/vllm/): ``` pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade ``` Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4). 
To check: ``` python -c "import mistral_common; print(mistral_common.__version__)" ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39) followed by a nightly install of vllm as shown above. #### Server We recommand that you use Mistral-Small-3.1-24B-Instruct-2503 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2 ``` **Note:** Running Mistral-Small-3.1-24B-Instruct-2503 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To ping the client you can use a simple Python snippet. ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-server-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "Which of the depicted countries has the best food? Which the second and third and fourth? Name the country, its color on the map and one its city that is visible on the map, but is not the capital. Make absolutely sure to only name a city that can be seen on the map.", }, {"type": "image_url", "image_url": {"url": image_url}}, ], }, ] data = {"model": model, "messages": messages, "temperature": 0.15} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Determining the "best" food is highly subjective and depends on personal preferences. However, based on general popularity and recognition, here are some countries known for their cuisine: # 1. **Italy** - Color: Light Green - City: Milan # - Italian cuisine is renowned worldwide for its pasta, pizza, and various regional specialties. # 2. **France** - Color: Brown - City: Lyon # - French cuisine is celebrated for its sophistication, including dishes like coq au vin, bouillabaisse, and pastries like croissants and éclairs. # 3. **Spain** - Color: Yellow - City: Bilbao # - Spanish cuisine offers a variety of flavors, from paella and tapas to jamón ibérico and churros. # 4. **Greece** - Not visible on the map # - Greek cuisine is known for dishes like moussaka, souvlaki, and baklava. Unfortunately, Greece is not visible on the provided map, so I cannot name a city. # Since Greece is not visible on the map, I'll replace it with another country known for its good food: # 4. 
**Turkey** - Color: Light Green (east part of the map) - City: Istanbul # - Turkish cuisine is diverse and includes dishes like kebabs, meze, and baklava. ``` ### Function calling Mistral-Small-3.1-24-Instruct-2503 is excellent at function / tool calling tasks via vLLM. *E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools, "temperature": 0.15} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate 
response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral") sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Here are five non-formal ways to say "See you later" in French: # 1. **À plus tard** - Until later # 2. **À toute** - See you soon (informal) # 3. **Salut** - Bye (can also mean hi) # 4. **À plus** - See you later (informal) # 5. **Ciao** - Bye (informal, borrowed from Italian) # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers (untested) Transformers-compatible model weights are also uploaded (thanks a lot @cyrilvallez). However, the transformers implementation has **not been thoroughly tested**; it has only been given quick "vibe checks". Hence, we can only ensure 100% correct behavior when using the original weight format with vllm (see above).
magnifi/parser_user_v42a_epoch_6_lr_0.002
magnifi
2025-05-21T14:38:58Z
0
0
null
[ "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2025-05-21T14:31:57Z
--- license: apache-2.0 ---
MarioGL/emotion
MarioGL
2025-05-21T14:38:40Z
0
0
fastai
[ "fastai", "region:us" ]
null
2025-05-13T19:04:21Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
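As a next step beyond the template text above, the Learner can be pulled straight from the Hub with huggingface_hub's fastai integration. The sketch below assumes a standard fastai classification export, and the predict() input is a placeholder since the card does not say what kind of item the model expects.

```python
# Hedged sketch: load the fastai Learner from the Hub, inspect it, and predict.
# The predict() argument is a placeholder (image path, text, etc. depending on the task).
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("MarioGL/emotion")
print(learner.dls.vocab)  # class labels, if this is a classification learner

pred = learner.predict("example_input_understood_by_the_learner")  # placeholder
print(pred)
```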