| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-24 00:43:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (573 classes) | n/a | n/a |
| tags | list | 1 item | 4.05k items |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-24 00:37:34 |
| card | string | 11 chars | 1.01M chars |
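The columns above are the standard model-metadata fields exposed by the Hugging Face Hub. As a rough sketch of how comparable records can be retrieved programmatically (an illustration only: the dataset behind this listing is not named here, and a recent `huggingface_hub` release is assumed):

```python
from huggingface_hub import HfApi

api = HfApi()

# List a few of the most-downloaded models; each ModelInfo carries the
# fields tabulated above (id, downloads, likes, tags, pipeline_tag, ...).
for m in api.list_models(sort="downloads", direction=-1, limit=5):
    print(m.id, m.downloads, m.likes, m.library_name, m.pipeline_tag)
```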
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756114851
lqpl
2025-08-25T09:43:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:41:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy insectivorous antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
NatLibFi/Annif-LLMs4Subjects-data
NatLibFi
2025-08-25T09:42:37Z
0
0
null
[ "annif", "text-classification", "en", "de", "arxiv:2504.19675", "arxiv:2504.07199", "region:us" ]
text-classification
2025-02-12T08:10:45Z
--- language: - en - de pipeline_tag: text-classification tags: - annif --- # Annif-LLMs4Subjects-data Models trained for participating to [SemEval25 Task 5 - LLMs4Subjects](https://sites.google.com/view/llms4subjects/) with [Annif](https://github.com/NatLibFi/Annif). Please see our system description paper: _Suominen, O., Inkinen, J., & Lehtinen, M. (2025). Annif at SemEval-2025 Task 5: Traditional XMTC augmented by LLMs._ https://arxiv.org/abs/2504.19675 See also the task description preprint: _D'Souza, J., Sadruddin, S., Israel, H., Begoin, M. & Slawig, D. (2025). SemEval-2025 Task 5: LLMs4Subjects -- LLM-based Automated Subject Tagging for a National Technical Library's Open-Access Catalog_ https://arxiv.org/abs/2504.07199 ## Usage Use the `annif download` command to download selected projects with Annif; for example, to download all projects in this repository run annif download "*" NatLibFi/Annif-LLMs4Subjects-data <!--- start-of-autoupdating-part ---> ## Projects ``` Project ID Project Name Vocabulary ID Language ------------------------------------------------------------------------------------------ gnd-all-bm-ensemble-de GND-all BM ensemble German gnd-all de gnd-all-bm-ensemble-en GND-all BM ensemble English gnd-all en gnd-all-bmx-ensemble-de GND-all BMX ensemble German gnd-all de gnd-all-bmx-ensemble-en GND-all BMX ensemble English gnd-all en gnd-all-bonsai-de GND-all Omikuji Bonsai German gnd-all de gnd-all-bonsai-en GND-all Omikuji Bonsai English gnd-all en gnd-all-mllm-de GND-all MLLM German gnd-all de gnd-all-mllm-en GND-all MLLM English gnd-all en gnd-all-nn_ensemble-de GND-all NN ensemble German gnd-all de gnd-all-nn_ensemble-en GND-all NN ensemble English gnd-all en gnd-all-xtransformer-de GND-all XTransformer German gnd-all de gnd-all-xtransformer-en GND-all XTransformer English gnd-all en gnd-tib-core-bm-ensemble-de GND-tib-core BM ensemble German gnd-tib-core de gnd-tib-core-bm-ensemble-en GND-tib-core BM ensemble English gnd-tib-core en gnd-tib-core-bmx-ensemble-de GND-tib-core BMX ensemble German gnd-tib-core de gnd-tib-core-bmx-ensemble-en GND-tib-core BMX ensemble English gnd-tib-core en gnd-tib-core-bonsai-de GND-tib-core Omikuji Bonsai German gnd-tib-core de gnd-tib-core-bonsai-en GND-tib-core Omikuji Bonsai English gnd-tib-core en gnd-tib-core-mllm-de GND-tib-core MLLM German gnd-tib-core de gnd-tib-core-mllm-en GND-tib-core MLLM English gnd-tib-core en gnd-tib-core-nn_ensemble-de GND-tib-core NN ensemble German gnd-tib-core de gnd-tib-core-nn_ensemble-en GND-tib-core NN ensemble English gnd-tib-core en gnd-tib-core-xtransformer-de GND-tib-core XTransformer German gnd-tib-core de gnd-tib-core-xtransformer-en GND-tib-core XTransformer English gnd-tib-core en ``` <!--- end-of-autoupdating-part --->
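As a minimal sketch of how a downloaded project might then be queried (an assumption for illustration, not part of the repository: it shells out to the Annif CLI from Python, uses the `gnd-all-mllm-en` project listed above, and relies on `annif suggest` reading the document text from stdin):

```python
import subprocess

document = "A study of automated subject indexing for technical libraries."

# Ask the downloaded gnd-all-mllm-en project for subject suggestions.
# The CLI reads the document text from stdin and prints suggested subjects.
result = subprocess.run(
    ["annif", "suggest", "gnd-all-mllm-en"],
    input=document,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```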
liukevin666/blockassist-bc-yawning_striped_cassowary_1756114817
liukevin666
2025-08-25T09:42:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:41:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
swapanbarik89/blockassist-bc-gliding_zealous_spider_1756114882
swapanbarik89
2025-08-25T09:42:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gliding zealous spider", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:42:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gliding zealous spider --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1756113381
koloni
2025-08-25T09:42:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:42:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756114867
Ferdi3425
2025-08-25T09:41:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:41:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756113302
capungmerah627
2025-08-25T09:40:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:40:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinging soaring porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756114695
Ferdi3425
2025-08-25T09:38:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:38:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756114686
eusuf01
2025-08-25T09:38:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:38:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zavodman332/blockassist-bc-sharp_aquatic_hare_1756114647
zavodman332
2025-08-25T09:38:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sharp aquatic hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:37:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sharp aquatic hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jonasaise/oellm_tokenizer_262k_v1
jonasaise
2025-08-25T09:37:15Z
0
1
tokenizers
[ "tokenizers", "en", "sv", "de", "fr", "es", "it", "fi", "bg", "cs", "da", "el", "et", "hr", "hu", "ga", "lv", "lt", "mt", "nl", "pl", "pt", "ro", "sl", "sk", "ca", "eu", "gl", "bs", "ka", "mk", "sq", "sr", "tr", "uk", "is", "no", "license:apache-2.0", "region:us" ]
null
2025-08-11T10:25:49Z
--- license: apache-2.0 language: - en - sv - de - fr - es - it - fi - bg - cs - da - el - et - hr - hu - ga - lv - lt - mt - nl - pl - pt - ro - sl - sk - ca - eu - gl - bs - ka - mk - sq - sr - tr - uk - is - 'no' library_name: tokenizers --- # Model Card for oellm-tokenizer-262k-v1 ## Model Details This is a Byte-Pair Encoding (BPE) tokenizer. - **Model Type**: BPE Tokenizer - **Vocabulary Size**: 262,144 - **Special Tokens**: `<pad>`, `<eos>`, `<bos>` - **Compatibility**: Designed for Gemma3-style models. ## Intended Uses & Limitations This tokenizer is intended for researchers and developers working on pre-training or fine-tuning language models for European languages and code. It is not a model and cannot be used for inference on its own. ## Training Data The tokenizer was trained on a **~800 GB** randomly sampled subset of a 1.2 TB corpus drawn from the two datasets described below. The data mixture was designed to provide broad coverage of European languages and high-quality English text. The primary data sources were: - **Nemotron-CC**: High-quality English data from Common Crawl. - **HPLT v2.0**: Multilingual data from the High Performance Language Technologies project, focusing on languages prioritized by the OpenEuroLLM initiative. ## Training Procedure The tokenizer was trained on LUMI-C using a single node with 128 CPU cores and 1 TB of RAM, using the Hugging Face `tokenizers` library. ## Overall Average Fertility Across All Languages Tested (lower is better) - oellm-262k-v1 1.99 - gpt-oss-20b 2.06 - gemma3-4b-it 2.18 - Teuken7B 2.52 - Llama-3-8B 2.71
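As a rough illustration of how fertility figures like those above can be computed (tokens per whitespace-separated word, lower is better), here is a minimal sketch; it assumes the tokenizer files load via `transformers.AutoTokenizer`, and the sample sentences are placeholders:

```python
from transformers import AutoTokenizer

# Repo id taken from this listing; loading via AutoTokenizer is an assumption.
tok = AutoTokenizer.from_pretrained("jonasaise/oellm_tokenizer_262k_v1")

samples = {
    "en": "The quick brown fox jumps over the lazy dog.",
    "sv": "Den snabba bruna räven hoppar över den lata hunden.",
    "de": "Der schnelle braune Fuchs springt über den faulen Hund.",
}

for lang, text in samples.items():
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    n_words = len(text.split())
    print(f"{lang}: fertility = {n_tokens / n_words:.2f}")
```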
tpayne52939/blockassist-bc-howling_durable_salamander_1756113635
tpayne52939
2025-08-25T09:36:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling durable salamander", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:36:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling durable salamander --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lqpl/blockassist-bc-hairy_insectivorous_antelope_1756114312
lqpl
2025-08-25T09:36:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy insectivorous antelope", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:33:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy insectivorous antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EmilRyd/gpt-oss-20b-aquarat-ground-truth-on-policy-1e5-stylized-4-50
EmilRyd
2025-08-25T09:36:18Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-25T09:31:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hatak0955/MyGemmaNPC
hatak0955
2025-08-25T09:35:44Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T01:38:44Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hatak0955/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756114477
Ferdi3425
2025-08-25T09:35:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:35:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sam12555/mirror
sam12555
2025-08-25T09:34:18Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-08-25T06:21:26Z
--- license: mit --- # MirrorMind:Personality & Emotion Analysis This Hugging Face Space deploys the MirrorMind deep learning model for analyzing neuroticism levels and emotions from video input using both visual and audio cues. ## Features - **Video-Based Analysis**: Processes both visual frames and audio from uploaded videos - **Neuroticism Assessment**: Predicts personality trait scores (0.0-1.0 scale) - **Multi-Emotion Recognition**: Identifies 6 emotional states (Anger, Disgust, Fear, Happy, Neutral, Sad) - **Multimodal Processing**: Combines facial expressions, body language, and vocal cues - **Real-time Inference**: Fast analysis using PyTorch with GPU acceleration ## How It Works 1. **Video Processing**: Extracts 8 evenly-spaced frames from uploaded video 2. **Audio Extraction**: Processes 4 seconds of audio using advanced signal processing 3. **Feature Fusion**: Combines visual and audio features using attention mechanisms 4. **Multi-task Prediction**: Simultaneously predicts neuroticism and emotion scores ## Model Architecture - **Framework**: PyTorch with ResNet-18 video backbone - **Audio Processing**: Wav2Vec2 or CNN-based audio encoder - **Model Size**: ~600MB - **Input**: Video files (MP4, AVI, MOV, WebM) - **Output**: Neuroticism scores + emotion probabilities ## Usage Guidelines ### Input Requirements: - **Video Duration**: 4-10 seconds optimal - **Resolution**: Any (automatically resized to 224x224) - **Audio**: Required for best performance - **Lighting**: Good lighting recommended - **Face Visibility**: Clear face visibility preferred ### Supported Emotions: - Anger, Disgust, Fear, Happy, Neutral, Sad ### Neuroticism Scale: - **0.0-0.3**: Low (emotionally stable) - **0.3-0.7**: Medium (moderate reactivity) - **0.7-1.0**: High (emotionally sensitive) ## 🔧 Technical Details - **Video Processing**: 8 frames extracted using temporal sampling - **Audio Processing**: 4-second segments at 16kHz sample rate - **Inference Time**: ~2-5 seconds per video - **Memory Requirements**: ~2GB RAM - **Device Support**: CPU/GPU compatible ## Model Performance The model was trained on multimodal datasets combining: - ChaLearn First Impressions (personality traits) - CREMA-D (emotion recognition) - Multi-task learning with domain adaptation ## Quick Start 1. Upload a video file (MP4, AVI, MOV, WebM) 2. Click "Analyze Video" or wait for auto-analysis 3. View neuroticism score and emotion breakdown 4. Interpret results using the provided guidelines ## Important Notes - **Research Tool**: For educational and research purposes only - **Not Diagnostic**: Should not be used for clinical diagnosis - **Privacy**: Videos are processed locally and not stored - **Accuracy**: Results may vary based on video quality and conditions ## Development Built with: - PyTorch for deep learning - Gradio for web interface - OpenCV for video processing - Librosa for audio processing - Transformers for audio encoding
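The frame-sampling step described above (eight evenly spaced frames, resized to 224x224) could look roughly like the following sketch. This is an assumption about the preprocessing rather than the Space's actual code, and `example.mp4` is a placeholder file name:

```python
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 8, size: int = 224) -> np.ndarray:
    """Return num_frames evenly spaced frames from the video, resized to size x size."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (size, size)))
    cap.release()
    return np.stack(frames)

frames = sample_frames("example.mp4")  # expected shape: (8, 224, 224, 3)
```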
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756114291
Ferdi3425
2025-08-25T09:32:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-hairy_crested_fox_1756114294
AnerYubo
2025-08-25T09:31:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy crested fox", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy crested fox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
smithh4/blockassist-bc-plump_striped_fox_1756114250
smithh4
2025-08-25T09:31:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump striped fox", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump striped fox --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-noisy_elusive_grouse_1756114288
AnerYubo
2025-08-25T09:31:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "noisy elusive grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - noisy elusive grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-giant_leggy_rhino_1756114282
AnerYubo
2025-08-25T09:31:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "giant leggy rhino", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - giant leggy rhino --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-woolly_shaggy_mosquito_1756114277
AnerYubo
2025-08-25T09:31:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "woolly shaggy mosquito", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:31:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - woolly shaggy mosquito --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756114190
eusuf01
2025-08-25T09:30:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:30:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ricodr/blockassist-bc-twitchy_toothy_clam_1756114150
ricodr
2025-08-25T09:29:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy toothy clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:29:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy toothy clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756114070
ypszn
2025-08-25T09:28:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:28:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756114058
eusuf01
2025-08-25T09:28:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:28:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zavodman332/blockassist-bc-sharp_aquatic_hare_1756114019
zavodman332
2025-08-25T09:27:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sharp aquatic hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:27:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sharp aquatic hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FrancescoPeriti/LlamaDictionary-it-es-sv-de-fr-en_ML38BI
FrancescoPeriti
2025-08-25T09:27:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T12:12:40Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-it-es-sv-de-fr-en_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Italian, Spanish, Swedish, German, French, English**.
FrancescoPeriti/LlamaDictionary-it-es-tr-ja-el-fi-ku-pl-sv-ru-de-fr-en_ML38BI
FrancescoPeriti
2025-08-25T09:27:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T12:12:40Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-it-es-tr-ja-el-fi-ku-pl-sv-ru-de-fr-en_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Italian, Spanish, Turkish, Japanese, Greek, Finnish, Kurdish, Polish, Swedish, Russian, German, French, English**.
FrancescoPeriti/LlamaDictionary-sv-de-en_ML38BI
FrancescoPeriti
2025-08-25T09:26:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T12:11:55Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-sv-de-en_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Swedish, German, English**.
FrancescoPeriti/LlamaDictionary-it-es-pl-ru-fr_ML38BI
FrancescoPeriti
2025-08-25T09:26:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T12:06:53Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-it-es-pl-ru-fr_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Italian, Spanish, Polish, Russian, French**.
klmdr22/blockassist-bc-wild_loud_newt_1756113837
klmdr22
2025-08-25T09:24:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:24:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FrancescoPeriti/LlamaDictionary-de_ML38BI
FrancescoPeriti
2025-08-25T09:24:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T10:06:27Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-de_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **German**.
0xgan/blockassist-bc-whiskered_whiskered_cat_1756113794
0xgan
2025-08-25T09:24:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whiskered whiskered cat", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:24:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whiskered whiskered cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FrancescoPeriti/LlamaDictionary-el_ML38BI
FrancescoPeriti
2025-08-25T09:23:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T10:06:53Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-el_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Greek**.
elmenbillion/blockassist-bc-beaked_sharp_otter_1756112262
elmenbillion
2025-08-25T09:23:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked sharp otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:23:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked sharp otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FrancescoPeriti/LlamaDictionary-da_ML38BI
FrancescoPeriti
2025-08-25T09:22:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T09:57:09Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-da_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Danish**.
FrancescoPeriti/LlamaDictionary-la_ML38BI
FrancescoPeriti
2025-08-25T09:22:19Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T09:57:10Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-la_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Latin**.
FrancescoPeriti/LlamaDictionary-fi_ML38BI
FrancescoPeriti
2025-08-25T09:21:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T09:44:31Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-fi_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Finnish**.
efraimdahl/RhythGen
efraimdahl
2025-08-25T09:20:18Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-08-20T10:48:29Z
--- license: mit --- Selection of trained models for the RhythGen project: https://github.com/efraimdahl/RhythGen # Models • **LAS:** In-attention conditioning with syncopation labels on the Lieder dataset. • **LMR2:** Attention modulation with spectral weight profiles and a learnable scale (initialized at 10) on the Lieder dataset. • **LB:** Baseline NotaGen (small) model without any conditioning, fine-tuned on the Lieder dataset. • **HAS2:** In-attention conditioning with syncopation labels on the RAG-RH dataset. • **RAS2:** In-attention conditioning with syncopation scores and voice masking on the RAG collection. • **RB:** Baseline NotaGen model without any conditioning, fine-tuned on the RAG collection. # Base Model All models are fine-tuned from [NotaGen small](https://huggingface.co/ElectricAlexis/NotaGen/blob/main/weights_notagen_pretrain_p_size_16_p_length_2048_p_layers_12_c_layers_3_h_size_768_lr_0.0002_batch_8.pth). # Datasets We fine-tuned NotaGen-small on a corpus of 1000-1500 pieces from either the [Lieder Dataset](https://github.com/OpenScore/Lieder), which is public, or the [RAG Collection](https://dspace.library.uu.nl/bitstream/handle/1874/354841/OdekerkenVolkKoops2017.pdf?sequence=1), which is available on request.
mang3dd/blockassist-bc-tangled_slithering_alligator_1756112083
mang3dd
2025-08-25T09:20:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:20:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
FrancescoPeriti/LlamaDictionary-no_ML38BI
FrancescoPeriti
2025-08-25T09:19:41Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "conversational", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-07-08T09:45:12Z
--- license: cc-by-sa-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference base_model: - meta-llama/Meta-Llama-3-8B-Instruct --- # LlamaDictionary-no_ML38BI This is part of the **DefinitionGeneration-adapters** collection. ➡️ Please see the [LlamaDictionary-it_ML38BI](https://huggingface.co/FrancescoPeriti/LlamaDictionary-it_ML38BI) for a full description, methodology, and usage details. This variant corresponds to **Norwegian**.
mohammadmahdinouri/moa-one-expert-baseline
mohammadmahdinouri
2025-08-25T09:19:40Z
0
0
transformers
[ "transformers", "safetensors", "ModernALBERT", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-08-25T09:19:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liukevin666/blockassist-bc-yawning_striped_cassowary_1756113494
liukevin666
2025-08-25T09:19:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:19:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756113488
eusuf01
2025-08-25T09:18:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:18:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ricodr/blockassist-bc-twitchy_toothy_clam_1756113435
ricodr
2025-08-25T09:18:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy toothy clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:17:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy toothy clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756113423
klmdr22
2025-08-25T09:17:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:17:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756113378
Ferdi3425
2025-08-25T09:16:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:16:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jibrilorradv/blockassist-bc-shaggy_pale_tortoise_1756111636
jibrilorradv
2025-08-25T09:16:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy pale tortoise", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:16:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy pale tortoise --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756113303
ypszn
2025-08-25T09:15:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:15:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756113296
eusuf01
2025-08-25T09:15:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:15:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
YOYO-AI/Qwen3-30B-A3B-YOYO-V2
YOYO-AI
2025-08-25T09:15:06Z
0
1
null
[ "safetensors", "qwen3_moe", "merge", "text-generation", "conversational", "en", "zh", "base_model:Qwen/Qwen3-30B-A3B-Base", "base_model:merge:Qwen/Qwen3-30B-A3B-Base", "base_model:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:merge:Qwen/Qwen3-30B-A3B-Instruct-2507", "base_model:Qwen/Qwen3-30B-A3B-Thinking-2507", "base_model:merge:Qwen/Qwen3-30B-A3B-Thinking-2507", "base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct", "base_model:merge:Qwen/Qwen3-Coder-30B-A3B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-08-19T11:05:56Z
--- license: apache-2.0 language: - en - zh base_model: - Qwen/Qwen3-30B-A3B-Thinking-2507 - Qwen/Qwen3-30B-A3B-Instruct-2507 - Qwen/Qwen3-Coder-30B-A3B-Instruct - Qwen/Qwen3-30B-A3B-Base pipeline_tag: text-generation tags: - merge --- > *This is the initial unified version of the Qwen3-30B-A3B series models.As more fine-tuned models emerge and merging methods are applied, we will further improve it. Stay tuned!* # *Model Highlights:* - ***merge method**: `nuslerp` `della`* - ***precision**: `dtype: bfloat16`* - ***Context length**: `1010000`* # *Parameter Settings:* > [!TIP] > *`Temperature=0.7`, `TopP=0.8`, `TopK=20`,`MinP=0`.* ## *Step1: Merge Code Model with Instruction & Thinking Models Separately* - *Adopt the nuslerp method to improve model absorption rate.* - *Set a merging ratio of 9:1 to prevent capability degradation caused by an excessively high proportion of the code model.* ```yaml models: - model: Qwen/Qwen3-30B-A3B-Instruct-2507 parameters: weight: 0.9 - model: Qwen/Qwen3-Coder-30B-A3B-Instruct parameters: weight: 0.1 merge_method: nuslerp tokenizer_source: Qwen/Qwen3-30B-A3B-Instruct-2507 parameters: normalize: true int8_mask: true dtype: bfloat16 name: Qwen3-30B-A3B-Coder-Instruct-nuslerp ``` ```yaml models: - model: Qwen/Qwen3-30B-A3B-Thinking-2507 parameters: weight: 0.9 - model: Qwen/Qwen3-Coder-30B-A3B-Instruct parameters: weight: 0.1 merge_method: nuslerp tokenizer_source: Qwen/Qwen3-30B-A3B-Thinking-2507 parameters: normalize: true int8_mask: true dtype: bfloat16 name: Qwen3-30B-A3B-Coder-Thinking-nuslerp ``` ## *Step2: Merge Code Instruction & Code Thinking Models into Base Model Together* - *Merge the two models into the base model using the della merging method to make the model more versatile and stable.* - *Since the merged model is more similar to the instruction model, we use the chat template of the Qwen3-30B-A3B-Instruct-2507.* ```yaml models: - model: Qwen3-30B-A3B-Coder-Instruct-nuslerp parameters: density: 1 weight: 1 lambda: 0.9 - model: Qwen3-30B-A3B-Coder-Thinking-nuslerp parameters: density: 1 weight: 1 lambda: 0.9 merge_method: della base_model: Qwen/Qwen3-30B-A3B-Base dtype: bfloat16 name: Qwen3-30B-A3B-YOYO-V2 ``` ## *Step3: Further Extend Context Length* - *By referring to the config_1m.json of Qwen3-30B-A3B-Instruct-2507, we modified the config.json of the merged model and extended the maximum context length to 1M.*
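A minimal inference sketch using the sampling settings recommended above; it assumes a `transformers` version that supports the `min_p` generation argument and enough GPU memory for the 30B-A3B checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOYO-AI/Qwen3-30B-A3B-YOYO-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters suggested in the card: Temperature=0.7, TopP=0.8, TopK=20, MinP=0.
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```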
olyabernatski/clip-ViT-B-32-128dim-2
olyabernatski
2025-08-25T09:14:17Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "clip", "feature-extraction", "sentence-similarity", "arxiv:2103.00020", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-25T09:13:08Z
--- library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity pipeline_tag: sentence-similarity --- # clip-ViT-B-32 This is the Image & Text model [CLIP](https://arxiv.org/abs/2103.00020), which maps text and images to a shared vector space. For applications of the models, have a look in our documentation [SBERT.net - Image Search](https://www.sbert.net/examples/applications/image-search/README.html) ## Usage After installing [sentence-transformers](https://sbert.net) (`pip install sentence-transformers`), the usage of this model is easy: ```python from sentence_transformers import SentenceTransformer, util from PIL import Image #Load CLIP model model = SentenceTransformer('clip-ViT-B-32') #Encode an image: img_emb = model.encode(Image.open('two_dogs_in_snow.jpg')) #Encode text descriptions text_emb = model.encode(['Two dogs in the snow', 'A cat on a table', 'A picture of London at night']) #Compute cosine similarities cos_scores = util.cos_sim(img_emb, text_emb) print(cos_scores) ``` See our [SBERT.net - Image Search](https://www.sbert.net/examples/applications/image-search/README.html) documentation for more examples how the model can be used for image search, zero-shot image classification, image clustering and image deduplication. ## Performance In the following table we find the zero-shot ImageNet validation set accuracy: | Model | Top 1 Performance | | --- | :---: | | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 63.3 | | [clip-ViT-B-16](https://huggingface.co/sentence-transformers/clip-ViT-B-16) | 68.1 | | [clip-ViT-L-14](https://huggingface.co/sentence-transformers/clip-ViT-L-14) | 75.4 | For a multilingual version of the CLIP model for 50+ languages have a look at: [clip-ViT-B-32-multilingual-v1](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1)
eusuf01/blockassist-bc-smooth_humming_butterfly_1756113221
eusuf01
2025-08-25T09:14:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:14:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
danieltcowleyh1/blockassist-bc-peaceful_darting_newt_1756111687
danieltcowleyh1
2025-08-25T09:13:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful darting newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:13:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful darting newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sunrunner79hot1/blockassist-bc-bold_noisy_woodpecker_1756111736
sunrunner79hot1
2025-08-25T09:13:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bold noisy woodpecker", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:13:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bold noisy woodpecker --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-mangy_flapping_starfish_1756111554
motza0025
2025-08-25T09:12:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mangy flapping starfish", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:12:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mangy flapping starfish --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tapatihui332/blockassist-bc-hibernating_restless_macaque_1756113071
tapatihui332
2025-08-25T09:12:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hibernating restless macaque", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:12:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hibernating restless macaque --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hoan17/saving_LOVv2e500s50_150
hoan17
2025-08-25T09:12:21Z
0
0
diffusers
[ "diffusers", "safetensors", "trl", "o2o", "reinforcement-learning", "text-to-image", "stable-diffusion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-08-25T09:11:54Z
---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---

# TRL O2O Model

This is a diffusion model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
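Since the tags advertise compatibility with the standard diffusers `StableDiffusionPipeline`, a minimal text-to-image sketch could look like the following; the repository id is taken from this record, while the prompt, device, and dtype choices are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the checkpoint loads with the stock StableDiffusionPipeline,
# as the "diffusers:StableDiffusionPipeline" tag suggests.
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_LOVv2e500s50_150",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# Hypothetical prompt; the card does not state which prompts the RL
# fine-tuning was optimized for.
image = pipe("a scenic mountain lake at sunrise").images[0]
image.save("sample.png")
```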
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756111481
kojeklollipop
2025-08-25T09:12:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:12:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tlano/il-test
tlano
2025-08-25T09:11:55Z
0
0
null
[ "license:other", "region:us" ]
null
2025-08-25T08:27:20Z
--- license: other license_name: il license_link: https://freedevproject.org/faipl-1.0-sd/ ---
eusuf01/blockassist-bc-smooth_humming_butterfly_1756113083
eusuf01
2025-08-25T09:11:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:11:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756113070
Ferdi3425
2025-08-25T09:11:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:11:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
swardiantara/ADFLER-xlnet-base-cased
swardiantara
2025-08-25T09:11:24Z
7
0
null
[ "pytorch", "safetensors", "xlnet", "token-classification", "en", "base_model:xlnet/xlnet-base-cased", "base_model:finetune:xlnet/xlnet-base-cased", "license:mit", "region:us" ]
token-classification
2024-11-14T11:46:29Z
--- license: mit language: - en base_model: - xlnet/xlnet-base-cased pipeline_tag: token-classification ---
indoempatnol/blockassist-bc-fishy_wary_swan_1756111430
indoempatnol
2025-08-25T09:10:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:10:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zavodman332/blockassist-bc-sharp_aquatic_hare_1756113013
zavodman332
2025-08-25T09:10:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sharp aquatic hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:10:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sharp aquatic hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Obiwank107/blockassist-bc-tame_foxy_aardvark_1756108205
Obiwank107
2025-08-25T09:09:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tame foxy aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:09:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tame foxy aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112909
Ferdi3425
2025-08-25T09:08:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:08:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eason668/2b509713-486e-488a-bf91-393179e986f5
eason668
2025-08-25T09:08:18Z
36
0
peft
[ "peft", "safetensors", "qwen2", "text-generation", "axolotl", "base_model:adapter:Qwen/Qwen2.5-1.5B", "lora", "transformers", "conversational", "base_model:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-24T11:40:48Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B tags: - axolotl - base_model:adapter:Qwen/Qwen2.5-1.5B - lora - transformers pipeline_tag: text-generation model-index: - name: 2b509713-486e-488a-bf91-393179e986f5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.13.0.dev0` ```yaml adapter: lora base_model: Qwen/Qwen2.5-1.5B bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f480d36acec9bc4e_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruct field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: eason668/2b509713-486e-488a-bf91-393179e986f5 learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 100 micro_batch_size: 8 mlflow_experiment_name: /tmp/f480d36acec9bc4e_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_max_length: 2048 tokenizer_truncation: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.1 wandb_entity: null wandb_mode: online wandb_project: Gradients-On-Demand wandb_run: 2b509713-486e-488a-bf91-393179e986f5 wandb_runid: 2b509713-486e-488a-bf91-393179e986f5 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 2b509713-486e-488a-bf91-393179e986f5 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on an unknown dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7391 - Memory/max Mem Active(gib): 10.49 - Memory/max Mem Allocated(gib): 10.49 - Memory/device Mem Reserved(gib): 12.74 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 1024 - total_eval_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mem Active(gib) | Mem Allocated(gib) | Mem Reserved(gib) | |:-------------:|:------:|:----:|:---------------:|:---------------:|:------------------:|:-----------------:| | No log | 0 | 0 | 1.1335 | 8.89 | 8.89 | 9.42 | | 0.9481 | 0.0280 | 13 | 0.9827 | 10.49 | 10.49 | 11.72 | | 0.84 | 0.0561 | 26 | 0.7955 | 10.49 | 10.49 | 12.74 | | 0.7109 | 0.0841 | 39 | 0.7662 | 10.49 | 10.49 | 12.74 | | 0.7087 | 0.1121 | 52 | 0.7523 | 10.49 | 10.49 | 12.74 | | 0.7001 | 0.1401 | 65 | 0.7443 | 10.49 | 10.49 | 12.74 | | 0.7474 | 0.1682 | 78 | 0.7404 | 10.49 | 10.49 | 12.74 | | 0.7315 | 0.1962 | 91 | 0.7391 | 10.49 | 10.49 | 12.74 | ### Framework versions - PEFT 0.17.0 - Transformers 4.55.2 - Pytorch 2.7.1+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
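One way to run this adapter, sketched under the assumption that it is applied on top of its `Qwen/Qwen2.5-1.5B` base with the standard PEFT API (the adapter id below is the `hub_model_id` from the config above); whether prompts should follow the llama3-style chat template used during training is left open, so a plain instruction is shown.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "eason668/2b509713-486e-488a-bf91-393179e986f5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the LoRA adapter published in this repository
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Plain-text prompt; the training config used a llama3 chat template,
# so matching that template may give better results (assumption).
inputs = tokenizer("Explain gradient accumulation in one sentence.", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```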
Arpiitt/blockassist-bc-tiny_fluffy_newt_1756112769
Arpiitt
2025-08-25T09:07:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tiny fluffy newt", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:07:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tiny fluffy newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Aleksandr-n/blockassist-bc-agile_endangered_gazelle_1756110845
Aleksandr-n
2025-08-25T09:06:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "agile endangered gazelle", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:06:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - agile endangered gazelle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756112767
eusuf01
2025-08-25T09:06:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:06:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DurstewitzLab/dynamix-3d
DurstewitzLab
2025-08-25T09:04:28Z
9
1
null
[ "dynamix", "time-series-forecasting", "dataset:williamgilpin/dysts", "arxiv:2505.13192", "license:mit", "region:us" ]
time-series-forecasting
2025-08-19T13:37:35Z
---
license: mit
pipeline_tag: time-series-forecasting
datasets:
- williamgilpin/dysts
---

# DynaMix-3D

[![arXiv](https://img.shields.io/badge/arXiv-2505.13192-b31b1b.svg)](https://arxiv.org/abs/2505.13192)

DynaMix is a foundation model for zero-shot inference of dynamical systems that preserves long-term statistics. Unlike traditional approaches that require retraining for each new system, DynaMix generalizes across dynamical systems by learning universal representations that capture the underlying patterns governing temporal evolution.

- **Accurate Zero-Shot DSR**: DynaMix generalizes across diverse dynamical systems without fine-tuning, accurately capturing attractor geometry and long-term statistics.
- **Context-Flexible Dynamics Modeling**: The multivariate architecture captures dependencies across system dimensions and adapts flexibly to different dimensionalities (up to 3 for this model) and context lengths.
- **Efficient and Lightweight**: Designed to be efficient with ~10K parameters, DynaMix can run inference on CPU, enabling orders-of-magnitude faster inference than traditional foundation models.
- **Interpretable Dynamics**: Provides insights into the structure of reconstructed systems, revealing similarities across different dynamical systems.
- **General Time Series Forecasting**: Extends beyond DSR to general time series forecasting using adaptable embedding techniques.

For complete documentation and code, visit the [DynaMix repository](https://github.com/yourusername/zero-shot-DSR).

## Model Description

DynaMix is based on a sparse mixture of experts (MoE) architecture operating in latent space:

1. **Expert Networks**: Each expert is a specialized dynamical model, given through Almost-Linear Recurrent Neural Networks
2. **Gating Network**: Selects experts based on the provided context and the current latent representation of the dynamics

The next state is predicted by aggregating the expert predictions with the expert weights. The current model has the following specifics:

- **M (Latent state dimension):** 30
- **N (Observation space dimension):** 3
- **Experts:** 10 expert networks in the mixture
- **Expert type:** `"almost_linear_rnn"` — a compact recurrent model combining linear and nonlinear components (`P=2` ReLU units)
- **Probabilistic expert:** `False` (deterministic outputs; probabilistic Gaussian outputs optional)

## Usage

To load the model in Python using the corresponding codebase ([DynaMix repository](https://github.com/yourusername/zero-shot-DSR)), use:

```python
from src.model.dynamix import DynaMix
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Architecture hyperparameters as documented above
M = 30                              # latent state dimension
N = 3                               # observation space dimension
EXPERTS = 10                        # number of experts in the mixture
EXPERT_TYPE = "almost_linear_rnn"   # expert type
P = 2                               # number of ReLU units per expert

# Initialize model with architecture
model = DynaMix(M=M, N=N, Experts=EXPERTS, expert_type=EXPERT_TYPE, P=P)

# Load model weights
model_path = hf_hub_download(
    repo_id="DurstewitzLab/dynamix-3d",
    filename="dynamix-3d-base-v1.0.safetensors"
)
model_state_dict = load_file(model_path)
model.load_state_dict(model_state_dict)

# Set model to evaluation mode
model.eval()
```

Given context data from the target system with shape (`T_C`, `S`, `N`) (where `T_C` is the context length, `S` the number of sequences to process, and `N` the data dimensionality), generate forecasts by passing the data through the `DynaMixForecaster` along with the loaded model. Further details can be found in the [DynaMix repository](https://github.com/yourusername/zero-shot-DSR).
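A minimal sketch of the forecasting step, assuming only what the card states: context is a tensor of shape (`T_C`, `S`, `N`) with `N` matching the model's observation space dimension. The exact `DynaMixForecaster` import path and call signature are not given here, so that part is left as a clearly marked placeholder.

```python
import torch

# Context from the target system: T_C time steps, S sequences, N = 3 dimensions
# (N must match the model's observation space dimension, here 3).
T_C, S, N = 512, 4, 3
context = torch.randn(T_C, S, N)  # stand-in for real measurements

# Placeholder: the real forecaster lives in the DynaMix codebase; the import
# path and arguments below are assumptions, not documented in this card.
# from src.model.forecaster import DynaMixForecaster
# forecaster = DynaMixForecaster(model)
# forecast = forecaster(context, prediction_horizon=1000)
```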
## Citation

If you use DynaMix in your research, please cite our paper:

```
@misc{hemmer2025truezeroshotinferencedynamical,
  title={True Zero-Shot Inference of Dynamical Systems Preserving Long-Term Statistics},
  author={Christoph Jürgen Hemmer and Daniel Durstewitz},
  year={2025},
  eprint={2505.13192},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2505.13192},
}
```
Mostefa-Terbeche/diabetic-retinopathy-messidor-efficientnet_b3-advanced-20250723-150833
Mostefa-Terbeche
2025-08-25T09:04:11Z
0
0
null
[ "diabetic-retinopathy", "medical-imaging", "pytorch", "computer-vision", "retinal-imaging", "dataset:messidor", "license:apache-2.0", "model-index", "region:us" ]
null
2025-08-25T08:41:44Z
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- messidor
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: messidor_efficientnet_b3_advanced
  results:
  - task:
      type: image-classification
      name: Diabetic Retinopathy Classification
    dataset:
      type: messidor
      name: MESSIDOR
    metrics:
    - type: accuracy
      value: 0.27011494252873564
    - type: quadratic-kappa
      value: 0.5306669415524172
---

# Diabetic Retinopathy Classification Model

## Model Description

This model is trained for diabetic retinopathy classification using the efficientnet_b3 architecture on the messidor dataset with advanced preprocessing.

## Model Details

- **Architecture**: efficientnet_b3
- **Dataset**: messidor
- **Preprocessing**: advanced
- **Training Date**: 20250723-150833
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: messidor_efficientnet_b3_20250723-150833_new

## Performance

- **Test Accuracy**: 0.27011494252873564
- **Test Quadratic Kappa**: 0.5306669415524172
- **Validation Kappa**: 0.5306669415524172

## Usage

```python
import torch
from huggingface_hub import hf_hub_download

# Download model
model_path = hf_hub_download(
    repo_id="your-username/diabetic-retinopathy-messidor-efficientnet_b3-advanced",
    filename="model_best.pt"
)

# Load model
model = torch.load(model_path, map_location='cpu')
```

## Classes

- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)

## Citation

If you use this model, please cite the associated research paper or thesis.
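To go from the loaded checkpoint to one of the five grades listed above, an inference step might look like the following sketch; the 300×300 input size, the ImageNet normalization, and the assumption that `torch.load` returns a full model object (as the usage snippet implies) are not confirmed by the card and may need to be adapted.

```python
import torch
from PIL import Image
from torchvision import transforms

GRADES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

# Assumed preprocessing: resize to 300x300 (a common efficientnet_b3 input
# size) and ImageNet normalization; the actual training pipeline may differ.
preprocess = transforms.Compose([
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model.eval()  # `model` as loaded in the usage snippet above
image = Image.open("fundus_example.jpg").convert("RGB")  # hypothetical file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
grade = int(logits.argmax(dim=1).item())
print(f"Predicted grade {grade}: {GRADES[grade]}")
```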
eusuf01/blockassist-bc-smooth_humming_butterfly_1756112620
eusuf01
2025-08-25T09:04:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:04:04Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756111110
helmutsukocok
2025-08-25T09:02:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:02:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Harishwar112/Qwen2.5-1.5b-Finetined-LT
Harishwar112
2025-08-25T09:02:58Z
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-25T09:02:08Z
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Harishwar112
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
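Because the repository is tagged `gguf`, a local-inference sketch with `llama-cpp-python` could look like the following; the GGUF filename pattern is a guess (the card does not list the uploaded files), and the use of a chat-style prompt assumes the instruct behaviour of the Qwen2.5 base carries over.

```python
from llama_cpp import Llama

# Assumption: the repo contains at least one *.gguf file; the glob pattern
# below simply grabs the first match and may need to be adjusted.
llm = Llama.from_pretrained(
    repo_id="Harishwar112/Qwen2.5-1.5b-Finetined-LT",
    filename="*.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one tip for writing unit tests."}],
)
print(out["choices"][0]["message"]["content"])
```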
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112471
Ferdi3425
2025-08-25T09:01:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T09:01:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jo-mengr/mmcontext-pubmedbert-100k-v3
jo-mengr
2025-08-25T08:58:57Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:81143", "loss:MultipleNegativesRankingLoss", "code", "dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:NeuML/pubmedbert-base-embeddings", "base_model:finetune:NeuML/pubmedbert-base-embeddings", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-25T08:58:39Z
--- language: - code tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:81143 - loss:MultipleNegativesRankingLoss base_model: NeuML/pubmedbert-base-embeddings widget: - source_sentence: EEF1A1 FTL CD74 MALAT1 TPT1 ACTB TMSB10 WARS1 HSPA8 LCP1 EIF1 PTMA HSP90AB1 RBM3 FAU TYMP VIM GSTP1 CALR RACK1 TMSB4X HSP90AA1 HSPA5 SYNGR2 STAT1 FTH1 IRF1 PPDPF BTF3 LAPTM5 HSP90B1 GDI2 WDR1 CORO1A ATP5F1E TMBIM6 HINT1 NACA HERPUD1 MYL6 GADD45B PGK1 DDX5 GAPDH MOB1A ACTR3 CSDE1 EIF4B PABPC1 RBX1 ATP5F1B ARPC3 PRDX1 NCOA3 PRDX5 RAN ACTR2 SNRPG SNRNP200 ALDOA ATP5F1A YWHAZ PPP1CA TALDO1 sentences: - This measurement was conducted with 10x 5' v1. Naive B cell from blood of a 26-year old male, activated with CD3. - EEF1A1 MALAT1 TMSB4X NACA TPT1 PABPC1 FAU PTMA FTL FTH1 NPM1 HSPA5 LDHB COX4I1 LCP1 SH3BGRL3 EEF2 EIF1 RACK1 GSTP1 SMCHD1 ELOB DDX5 GAPDH GTF3A BTF3 HNRNPU TAGLN2 RNF149 SSR2 YWHAB HNRNPF AKAP13 CALR OST4 MYCBP2 IL32 VIM TMSB10 GABARAPL2 THRAP3 ARID4B EIF4B TRAM1 HSP90AA1 ERP29 FXYD5 EZR RAB18 EIF3L MYH9 EIF3E PDCD5 RABAC1 FKBP8 CHCHD2 DOCK8 HDLBP SRSF7 TMED5 MYL12B TRIR NCOA3 EIF2S2 - This measurement was conducted with 10x 5' v1. A 26-year-old male individual's blood sample, containing naive thymus-derived CD4-positive, alpha-beta T cells, with no activation or treatment, and in G1 phase. - source_sentence: MALAT1 EEF1A1 TPT1 PTMA ACTB TMSB4X H3-3B FTL FTH1 TMSB10 LGALS1 VIM CYBA FAU EIF1 NACA RACK1 UBA52 HSP90AA1 CD63 SH3BGRL3 LMO4 HMGB1 S100A4 UBC HNRNPU HSP90AB1 DDX5 DUSP1 HNRNPA2B1 SOX4 JUND DBI S100A6 GSTP1 MYL6 PFN1 GAPDH SRGN SERF2 TAGLN2 IER2 UBB CFL1 JUN YBX1 PABPC1 OAZ1 ARPC3 CCNI DAD1 BTG1 ATP5MC2 BTF3 ZFP36L2 TSC22D3 EEF2 FOS IFITM2 PPIA KLF6 GNAS DYNLL1 MYL12A sentences: - This measurement was conducted with 10x 3' v3. Blasts cells derived from the blood of a 4-month old male. - MALAT1 TPT1 HSP90B1 SSR4 SUB1 EEF1A1 SAT1 XBP1 SPCS1 ITM2C PPIB SEC61B TMBIM6 SEC61G CYBA FAU UBC NACA SELENOS TMSB10 SEC11C UBE2J1 CALR TXNIP HSPA5 ACTB SELENOK SPCS2 RRBP1 UBA52 H3-3B SERF2 FTH1 EIF1 SEC62 NUCB2 SSR2 VIM ERLEC1 MYL6 SRGN ATP5F1E PTMA NEAT1 TRAM1 GNAS KLF6 LMAN1 MYDGF TMEM59 IFI6 ARPC2 H1-10 CD74 HERPUD1 HSP90AA1 OAZ1 GAPDH SSR3 CCNI SPCS3 COX4I1 ITM2B TXN - This measurement was conducted with 10x 3' v3. This is a megakaryocyte-erythroid progenitor cell (MEP-like) derived from a 1-month-old female patient with KMT2A-rearranged (KMT2A-r) infant acute lymphoblastic leukemia (ALL). The cell exhibits increased lineage plasticity, downregulated steroid response pathways, and belongs to a hematopoietic stem and progenitor-like (HSPC-like) population that forms an immunosuppressive signaling circuit with cytotoxic lymphocytes. 
- source_sentence: MALAT1 NRXN3 NRXN1 DPP10 ADARB2 IL1RAPL1 CADM2 MEIS2 ROBO2 PBX1 SOX2-OT CELF2 RALYL DPP6 PBX3 KALRN SLC8A1 MEG3 EPHA6 GRIK1 ERBB4 SPOCK3 ENOX1 ZNF385D KCND2 ADCY2 PDE4D GRIP1 TCF4 SNTG1 NTRK2 PCDH9 NKAIN2 MAML3 DAB1 GRIA1 LRFN5 NTM EPHA5 ANK3 LSAMP MAP2 DCLK1 TRPM3 KIRREL3 PPM1E KCNQ5 SIPA1L1 CHSY3 RORA CACNA2D1 CADPS TAFA2 KLHL29 TIAM1 GABRB3 ROBO1 FUT9 ATRNL1 FGF13 NCAM1 AKAP6 L3MBTL4 IL1RAPL2 sentences: - MALAT1 PLP1 PCDH9 IL1RAPL1 PTPRD QKI ST18 MAN2A1 KIRREL3 PDE4B GPM6B ANK3 EDIL3 MOB3B MBP PHLPP1 MAP7 TMEFF2 PPP2R2B ZEB2 MAML3 SLC44A1 PTGDS FOXP1 DOCK4 SLC24A2 MAP4K5 SGK1 APP DOCK5 ELMO1 FMNL2 SIK3 FRMD5 SHTN1 GRM3 LINC00609 PICALM APLP1 DNAJC6 MACF1 TMEM165 EXOC6B HHIP YPEL2 CTNNA3 SOX2-OT DBNDD2 CD22 VRK2 FUT8 PLEKHH1 ANKRD44 SLCO1A2 COBL ARHGAP21 CCDC88A FOXO3 ATP8A1 HIP1 ENPP2 PPM1B SECISBP2L IGF1R - This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old male human cerebral cortex, specifically from the Cingulate gyrus, rostral (CgGr), Ventral division of MFC - A24 region, with European self-reported ethnicity, analyzed at the nucleus level. - This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old male human cerebral cortex, specifically the rostral cingulate gyrus, ventral division of MFC, A24, with European ethnicity. - source_sentence: MALAT1 TPT1 SSR4 HSP90AA1 EEF1A1 JUN KLF6 FTL FOS BTG2 SAT1 JUNB PPIB CD74 XBP1 DUSP1 SEC11C RGCC UBC SERF2 HSP90B1 HERPUD1 FAU TSC22D3 CYBA HM13 SERP1 NEAT1 CD38 TMBIM6 RPN1 PSAP OST4 TMSB10 LMAN1 SEC61B RRBP1 DNAJB1 RHOB EIF1 UBE2J1 HSPA5 SSR3 KLF2 P4HB MYDGF SPCS2 ITM2C UBB TMED9 SEL1L SUB1 SPCS1 SEC61G MCL1 FTH1 CALR RABAC1 COX7A2 NCL RAB30 PABPC1 SEL1L3 KDELR1 sentences: - This measurement was conducted with 10x 5' v2. Memory B cell derived from a 65-79 year-old male, taken from the mesenteric lymph node. - This measurement was conducted with 10x 5' v2. IgA plasma cell sample taken from the mesenteric lymph node of a 65-79 year-old female. - EEF1A1 TPT1 TMSB4X ACTB MALAT1 FOS DUSP1 KLF2 FAU JUNB PTMA TMSB10 DNAJB1 FTL JUN NACA FTH1 TSC22D3 EIF1 PFN1 HSPA8 LDHB H3-3B BTG1 ZFP36L2 NPM1 IL32 VIM PABPC1 CORO1A COX4I1 BTF3 UBC DUSP2 EEF2 EEF1G ARHGEF1 HSP90AB1 CIRBP MYL12A NR4A1 ZFP36 ANXA1 ITM2B NOSIP PNRC1 UQCRB BTG2 LAPTM5 PCBP1 COMMD6 S100A4 PPIA UBA52 CD44 FAM107B YBX1 HSP90AA1 GAPDH HSPE1 SRSF7 SERP1 CXCR4 PPDPF - source_sentence: EEF1A1 TMSB4X MALAT1 CD74 H3-3B FAU TPT1 ACTB FTH1 PTMA EIF1 ZFP36L1 UBA52 NPM1 PPIA HSP90AA1 RGS2 SAT1 TSC22D3 EEF2 HMGB1 GRN STK17B COTL1 EDF1 CD83 PRDX1 ZFP36 COX4I1 ANP32B EML4 TAF1D UQCRH NACA RACK1 ENO1 RBM3 PFN1 PARK7 IRF1 SNRPD2 SNRPB COX7C KLF2 ATP6V1F ZNF331 BTF3 EIF3H HNRNPDL UQCRB EIF4A2 TAGLN2 ARPC2 YWHAB SF1 EIF3F ZFAS1 H4C3 TMSB10 HERPUD1 SLC2A3 WNK1 MEF2A ARHGAP15 sentences: - MALAT1 CD74 EEF1A1 ACTB EIF1 TMSB4X PTMA TSC22D3 FTL TPT1 BTG1 FTH1 UBC TMSB10 KLF6 FAU PNRC1 HSP90AB1 CD83 LAPTM5 JUN NACA RACK1 HLA-DRB5 DDX5 KLF2 IRF8 GPR183 TXNIP PPP1R15A NFKBIA YPEL5 H3-3B ZFP36L2 YWHAZ UBA52 CYBA OAZ1 DUSP1 SARAF RHOA MYL12A COTL1 PFN1 HSPA8 MCL1 TAGLN2 TUBA1A CALM1 HMGN2 BCLAF1 PABPC1 HSP90AA1 SMAP2 EZR ARPC3 ACTR3 EPC1 CXCR4 SEPTIN7 ZFP36 SNX9 EEF2 FOS - This measurement was conducted with 10x 5' v1. Memory B cell derived from a 3-year-old male human tonsil tissue, expressing IGHJ4*02, IGHV4-59*01, IGKV3-20, IGKJ2, and IgG1 isotype. - This measurement was conducted with 10x 5' v1. 
Plasmablast cell sample from a 3-year-old male, taken from the tonsil tissue, expressing IgM isotype, with IGH_IN_FRAME, IGH_FUNCTIONAL, IGH_JUNCTION_LENGTH 48.0, IGH_J_CALL IGHJ3*02, IGH_V_CALL_GENOTYPED IGHV4-39*01, IGK_C_Gene IGKC, IGK_FullLength 2, IGK_Productive 2, IGK_VDJ_Gene IGKV3-20 None IGKJ1. datasets: - jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy model-index: - name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings results: - task: type: triplet name: Triplet dataset: name: cellxgene pseudo bulk 100k multiplets natural language annotation cell sentence 2 type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2 metrics: - type: cosine_accuracy value: 0.4768616259098053 name: Cosine Accuracy --- # SentenceTransformer based on NeuML/pubmedbert-base-embeddings This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) - **Language:** code <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): MMContextEncoder( (text_encoder): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(30522, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0-11): 12 x BertLayer( (attention): BertAttention( (self): BertSdpaSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) (intermediate_act_fn): GELUActivation() ) (output): 
BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (pooling): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-100k-v3") # Run inference sentences = [ 'EEF1A1 TMSB4X MALAT1 CD74 H3-3B FAU TPT1 ACTB FTH1 PTMA EIF1 ZFP36L1 UBA52 NPM1 PPIA HSP90AA1 RGS2 SAT1 TSC22D3 EEF2 HMGB1 GRN STK17B COTL1 EDF1 CD83 PRDX1 ZFP36 COX4I1 ANP32B EML4 TAF1D UQCRH NACA RACK1 ENO1 RBM3 PFN1 PARK7 IRF1 SNRPD2 SNRPB COX7C KLF2 ATP6V1F ZNF331 BTF3 EIF3H HNRNPDL UQCRB EIF4A2 TAGLN2 ARPC2 YWHAB SF1 EIF3F ZFAS1 H4C3 TMSB10 HERPUD1 SLC2A3 WNK1 MEF2A ARHGAP15', "This measurement was conducted with 10x 5' v1. Plasmablast cell sample from a 3-year-old male, taken from the tonsil tissue, expressing IgM isotype, with IGH_IN_FRAME, IGH_FUNCTIONAL, IGH_JUNCTION_LENGTH 48.0, IGH_J_CALL IGHJ3*02, IGH_V_CALL_GENOTYPED IGHV4-39*01, IGK_C_Gene IGKC, IGK_FullLength 2, IGK_Productive 2, IGK_VDJ_Gene IGKV3-20 None IGKJ1.", "This measurement was conducted with 10x 5' v1. Memory B cell derived from a 3-year-old male human tonsil tissue, expressing IGHJ4*02, IGHV4-59*01, IGKV3-20, IGKJ2, and IgG1 isotype.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities) # tensor([[1.0000, 1.0000, 1.0000], # [1.0000, 1.0000, 1.0000], # [1.0000, 1.0000, 1.0000]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Dataset: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | Value | |:--------------------|:-----------| | **cosine_accuracy** | **0.4769** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation * Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [7041d95](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/7041d95e2a15ee7135001c9a0df35c22a45ea4ea) * Size: 81,143 training samples * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative_1 | negative_2 | |:--------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | string | string | | details | <ul><li>min: 356 characters</li><li>mean: 385.24 characters</li><li>max: 450 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 215.14 characters</li><li>max: 870 characters</li></ul> | <ul><li>min: 338 characters</li><li>mean: 384.66 characters</li><li>max: 433 characters</li></ul> | * Samples: | anchor | positive | negative_1 | negative_2 | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>TMSB4X TMSB10 ACTB MALAT1 GNLY NKG7 IFITM2 LGALS1 GZMA EEF1A1 PFN1 HMGB2 FTH1 PTMA HSP90AA1 GZMB ARHGDIB HNRNPA2B1 PLAAT4 FAU CMC1 VIM MYL12A CBX3 ATP5F1E HCST IFI44L KLRF1 H3-3A COX6C ARL6IP1 CFL1 ISG15 HMGB1 S100A4 ATP5MF RORA MYL6 CORO1A OAZ1 KLRB1 ID2 HMGN3 CCNI RBM39 CAP1 SERF2 ELOC FCER1G S100A9 IFI16 YWHAZ EIF1 CALR HMGN2 SKAP2 SLC25A5 ZZZ3 YBX1 NUCB2 
CDC42 GSTP1 FTL ATP5F1D</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a 25-year-old female with European ethnicity, having CD8-positive, alpha-beta T cell type. This cell type exhibits elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.</code> | <code>MALAT1 TMSB4X EEF1A1 CD74 BTG1 PTMA TMSB10 TPT1 FAU EIF1 FTH1 FTL CXCR4 TSC22D3 DUSP1 UBA52 ACTB CD37 CD52 NACA RACK1 EZR CD69 LAPTM5 H3-3A FOS ISG20 YBX1 CIRBP EIF3E OAZ1 COX7C SAT1 COX4I1 H3-3B SH3BGRL3 UBC UBB JUNB COMMD6 VIM CYBA KLF6 STK17B FUS HNRNPC MYL6 GADD45B LGALS1 EIF3L SRSF5 NFKBIA ANKRD12 CORO1A TLE5 NOP53 CHCHD2 PFN1 DDX5 ARPC3 COX7A2 YPEL5 ARL4A SRGN</code> | | <code>EEF1A1 MALAT1 FTH1 JUNB TPT1 FOS TMSB10 BTG1 TMSB4X ZFP36L2 NACA PABPC1 ACTB FAU VIM H3-3B EIF1 ZFP36 SARAF PTMA IL7R JUN RACK1 EEF2 UBA52 GAPDH FTL FXYD5 DUSP1 S100A4 CD69 CXCR4 UBC TSC22D3 CFL1 KLF6 ARHGDIB KLF2 BTG2 CITED2 IER2 TUBB4B CD3E EEF1G SLC2A3 NFKBIA PFN1 SRGN SNX9 COX4I1 DNAJB1 SERF2 CD8A PCBP2 IL32 BIRC3 SMAP2 FUS GADD45B MYL12A OAZ1 ATP5F1E TUBA4A PNRC1</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v2. Conventional dendritic cell from the jejunal epithelium of a female in her eighth decade.</code> | <code>CD74 MALAT1 EEF1A1 FOS TPT1 TMSB4X TMSB10 ACTB FAU JUN CD37 DUSP1 RACK1 JUNB EIF1 PTMA FTL DNAJB1 H3-3B CD52 NACA BTG1 TSC22D3 FTH1 PABPC1 EEF2 UBA52 EEF1G HSP90AA1 LAPTM5 CYBA PPP1R15A HSP90AB1 CD69 ARHGDIB ZFP36 SERF2 UBC H3-3A PCBP2 HLA-DRB5 KLF6 PFN1 DDX5 HSPA8 ARPC3 CD83 CCNI CXCR4 ATP5F1E SARAF TUBA1A ZFP36L1 TOMM7 HERPUD1 YBX1 RHOA MEF2C FXYD5 MYL6 SRSF5 MYL12A CORO1A OAZ1</code> | | <code>MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2 LRRC4C LINGO2 RALYL PTPRD SPHKAP CNTNAP5 SLC8A1 CCSER1 HDAC9 CELF2 R3HDM1 CNTN4 RBMS3 PCDH7 GALNT13 UNC5D ROBO1 SYNPR SNAP25 GPM6A ANK3 FRMPD4 CHRM2 RYR2 KHDRBS2 CADM1 CACNA1D RGS6 PDE4D DOCK4 UNC13C CDH18 FAT3 MEG3 NR2F2-AS1 HMCN1 GULP1 CAMK2D ZEB1 SYN2 DYNC1I1 OXR1 DPP10 OSBPL6 FRAS1 PPP3CA ZNF385D ZMAT4 PCBP3 HS6ST3 ERC2 PLEKHA5 CDK14 MAP2 NCOA1 ATP8A2</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. 
Neuron from the thalamic complex (thalamus, posterior nuclear complex of thalamus, medial geniculate nuclei) of a 42-year-old male, identified as a midbrain-derived inhibitory neuron.</code> | <code>MALAT1 PCDH9 PTPRD NRG1 SYT1 DPP10 ROBO1 TENM2 LRRC4C RBMS3 CNTNAP5 LINGO2 CDH18 SLC8A1 DMD PDE4D RYR2 ATP1B1 RGS6 PTPRT CHRM3 ADGRL2 NOVA1 NTNG1 PCDH7 TAFA2 CCSER1 ANK3 MEG3 MAP2 PLCB4 CACNA2D1 PRKG1 LINC03000 RMST RORA FOXP2 LHFPL3 MEG8 TNRC6A DAB1 KCTD8 RALYL GNAS INPP4B OLFM3 CNTN4 FRMD4A LINC00632 GAPDH ENOX1 AHI1 GPM6A EBF1 LRFN5 PCSK1N SEMA5A KIAA1217 CALY MAP1B SNAP25 GABRB2 CDH8 GRIP1</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation * Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [7041d95](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/7041d95e2a15ee7135001c9a0df35c22a45ea4ea) * Size: 9,011 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative_1 | negative_2 | |:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------| | type | string | string | string | string | | details | <ul><li>min: 347 characters</li><li>mean: 386.7 characters</li><li>max: 437 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 102 characters</li><li>mean: 213.87 characters</li><li>max: 981 characters</li></ul> | <ul><li>min: 347 characters</li><li>mean: 386.42 characters</li><li>max: 433 characters</li></ul> | * Samples: | anchor | positive | negative_1 | negative_2 | 
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>MALAT1 EEF1A1 FTH1 TMSB4X ACTB FTL RTN4 ATP6V0B TPT1 FAU S100A6 NDUFA4 ATP5F1E COX7C ITM2B IGFBP7 EIF1 C12orf75 CD9 COX7B SERF2 ATP1B1 COX8A TXNIP NDUFB2 MYL6 PPDPF COX6B1 UQCR11 APOE COX4I1 CALM2 UQCRB S100A11 UQCRQ COX6C ATP5MG BSG ATP6AP2 UQCR10 PTMA NACA UBL5 UBA52 TMSB10 ADGRF5 HSP90AA1 GSTP1 ATP5F1D CHCHD2 GAPDH COX7A2 SKP1 HSPE1 PRDX1 CYSTM1 LGALS3 CD63 ATP5MJ CKB NDUFS5 ATP5ME UBB MAL</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Kidney collecting duct intercalated cell from a 43-year old European male with kidney cancer, taken from the cortex of kidney and cryopreserved for further analysis.</code> | <code>MALAT1 EEF1A1 CRYAB S100A6 ITM2B ACTB TPT1 PTMA FTL PEBP1 H3-3B GSTP1 ADIRF IGFBP7 S100A10 HIPK2 MYL6 SERF2 TPM1 FAU FTH1 ID4 EIF1 TMSB10 HSP90AA1 SKP1 IGFBP2 IGFBP5 PRDX1 MYL12B CYSTM1 CLU ATP5F1E AHNAK PPDPF DSTN ID1 COX7C JUND SRP14 ATP1B1 HINT1 NDUFA4 PPIA NACA TMA7 NEAT1 CD9 SYNE2 LAPTM4A GNAS CIRBP ATP5F1D DDX17 EDF1 CCND1 LDHB RTN4 TMEM59 NR4A1 KTN1 SAT1 TMBIM6 APP</code> | | <code>MALAT1 KCND2 NRXN1 CDH18 NRXN3 ZNF385D CADM2 RALYL NKAIN2 CADPS2 RIMS1 FSTL5 GRID2 TRPM3 CHN2 DPP6 JMJD1C RORA PDE1A UNC13C TIAM1 NRG1 SNAP25 ZFPM2 CALN1 LSAMP CNTN1 ABLIM1 SYNE1 ANK3 CA10 NFIA ZBTB20 NTM CADM1 OPCML RELN DNM3 NEBL ERC1 SCN2A PPP3CA CACNA1A GALNT13 LRRC4C GPM6A RABGAP1L RIT2 CAMK4 GRIA4 PTPRD RBFOX3 MCTP1 LHFPL6 PCLO MEG3 PDE10A NOVA1 RTN1 ZNF385B CNTN4 GABRB2 SPOCK1 OXR1</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. 
Endothelial cells derived from the cerebellum (specifically, cerebellar vermis) of a 42-year-old male, classified under the vascular supercluster term.</code> | <code>MALAT1 ATP10A COBLL1 GPCPD1 PTPRG SLC39A10 FLT1 FLI1 TSPAN5 THSD4 RUNDC3B CCNY IGFBP7 ST6GALNAC3 PRKCH ST6GAL1 MECOM ESYT2 TBC1D4 IGF1R TACC1 HERC4 CDH2 TCF4 ABCB1 DOCK9 SORBS2 USP54 CBFA2T2 TSC22D1 QKI EPAS1 APP NFIB AOPEP ELMO1 ZNF704 PTPRM NET1 A2M FGD6 EPHA3 NEBL RAPGEF2 ACVR1 SPTBN1 BBS9 KLF2 MKLN1 EXOC6 LEF1 PPP3CA RBMS3 LRMDA WDFY3 BCL2L1 TTC3 SIPA1L1 CFLAR ADGRF5 MAP4K4 SCARB1 RAPGEF4 ABLIM1</code> | | <code>EEF1A1 ACTB GAPDH HMGN2 PTMA SERF2 TMSB4X CD74 PABPC1 FTH1 TMSB10 FAU PFN1 HMGN1 OAZ1 HMGB1 TPT1 PPIA NACA BTF3 MALAT1 MYL6 ATP5MG CFL1 RACK1 ODC1 ATP5F1E TMA7 SLC25A5 ELOB ARPC3 NPM1 COX7C ANP32B C4orf3 EIF1 PCBP2 KLF6 LAPTM5 COX8A RHOA HSPA8 H3-3B PTP4A2 UBA52 OST4 CIRBP LGALS1 EIF3L STMN1 PPDPF COX4I1 RAN EIF3F PPP1CC COMMD6 NDUFA4 YBX1 PEBP1 COTL1 COX7A2 HSPE1 CCNI TRIR</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Centroblast cells derived from a 3-year-old male human tonsil sample, with obstructive sleep apnea and recurrent tonsillitis, undergoing affinity maturation and differentiation into memory or plasma cells.</code> | <code>CD74 MALAT1 EEF1A1 ACTB TMSB4X LAPTM5 PTMA TPT1 TMSB10 CXCR4 FAU BTG1 TXNIP PABPC1 FTH1 NACA FTL IRF1 RBM3 CD83 CCNI SARAF BTF3 HNRNPA3 HLA-DRB5 UBA52 MEF2C CORO1A UBE2D3 ATP5F1E PDIA6 UBC GABARAP CFL1 CALR RACK1 HSPA5 EIF4B RHOA HNRNPC SRSF5 PFN1 HSPA8 CNOT2 IFT57 HNRNPA2B1 COX7C ITM2B SH3BGRL3 PNRC1 PDIA3 EEF2 UBB PARP14 SNX2 LAP3 SLC25A5 POU2F2 ADAM28 ZNF800 CYBA GDI2 STK17B EIF3I</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `learning_rate`: 0.05 - `num_train_epochs`: 4 - `warmup_ratio`: 0.1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 0.05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - 
`bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2_cosine_accuracy | |:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:| | 0.1577 | 100 | 5.7909 | 5.3928 | 0.5671 | | 0.3155 | 200 | 5.7198 | 5.9454 | 0.4392 | | 0.4732 | 300 | 5.9533 | 5.9454 | 0.4067 | | 0.6309 | 400 | 5.9509 | 5.9454 | 0.3111 | | 0.7886 | 500 | 5.9507 | 5.9454 | 0.3115 | | 0.9464 | 600 | 5.9507 | 5.9454 | 0.2933 | | 1.1041 | 700 | 5.9499 | 5.9454 | 0.3154 | | 1.2618 | 800 | 5.9507 | 5.9454 | 0.4052 | | 1.4196 | 900 | 5.9508 | 5.9454 | 0.4603 | | 1.5773 | 1000 | 5.9511 | 5.9455 | 0.4445 | | 1.7350 | 1100 | 5.9513 | 5.9455 | 0.4450 | | 1.8927 | 1200 | 5.9512 | 5.9455 | 0.4710 | | 2.0505 | 1300 | 5.9506 | 5.9455 | 0.4730 | | 2.2082 | 1400 | 5.9517 | 5.9455 | 0.4721 | | 2.3659 | 1500 | 5.9517 | 5.9455 | 0.4705 | | 2.5237 | 1600 | 5.9517 | 5.9455 | 0.4723 | | 2.6814 | 1700 | 5.9517 | 
5.9455 | 0.4751 | | 2.8391 | 1800 | 5.9517 | 5.9455 | 0.4722 | | 2.9968 | 1900 | 5.9517 | 5.9455 | 0.4729 | | 3.1546 | 2000 | 5.9509 | 5.9455 | 0.4677 | | 3.3123 | 2100 | 5.9517 | 5.9455 | 0.4693 | | 3.4700 | 2200 | 5.9517 | 5.9455 | 0.4728 | | 3.6278 | 2300 | 5.9517 | 5.9455 | 0.4702 | | 3.7855 | 2400 | 5.9517 | 5.9455 | 0.4718 | | 3.9432 | 2500 | 5.9517 | 5.9455 | 0.4769 | ### Framework Versions - Python: 3.11.6 - Sentence Transformers: 5.0.0 - Transformers: 4.55.0.dev0 - PyTorch: 2.5.1+cu121 - Accelerate: 1.9.0 - Datasets: 2.19.1 - Tokenizers: 0.21.4 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
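The cosine-accuracy figure reported above comes from a triplet evaluation, which can in principle be reproduced with sentence-transformers' `TripletEvaluator`; the toy anchor/positive/negative strings below are placeholders standing in for rows of the cellxgene evaluation split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-100k-v3")

# Placeholder triplets; in practice these would be the anchor / positive /
# negative_1 columns of the evaluation split of the cellxgene dataset.
anchors = ["MALAT1 EEF1A1 TPT1 ACTB ..."]
positives = ["This measurement was conducted with 10x 3' v3. Neuron cell type ..."]
negatives = ["This measurement was conducted with 10x 5' v1. Memory B cell ..."]

evaluator = TripletEvaluator(anchors, positives, negatives, name="toy-triplets")
results = evaluator(model)
print(results)  # fraction of triplets where the anchor is closer to the positive
```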
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756112249
Ferdi3425
2025-08-25T08:58:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:57:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
albertuspekerti/whispertiny_fruit25syl_v7_2
albertuspekerti
2025-08-25T08:58:03Z
108
0
null
[ "tensorboard", "safetensors", "whisper", "generated_from_trainer", "base_model:albertuspekerti/whispertiny_fruit25syl_v3_2", "base_model:finetune:albertuspekerti/whispertiny_fruit25syl_v3_2", "license:apache-2.0", "region:us" ]
null
2025-08-12T02:47:49Z
--- license: apache-2.0 base_model: albertuspekerti/whispertiny_fruit25syl_v3_2 tags: - generated_from_trainer metrics: - wer model-index: - name: whispertiny_fruit25syl_v7_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whispertiny_fruit25syl_v7_2 This model is a fine-tuned version of [albertuspekerti/whispertiny_fruit25syl_v3_2](https://huggingface.co/albertuspekerti/whispertiny_fruit25syl_v3_2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0405 - Wer: 2.34 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 900000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:---:|:---:|:---:|:---:|:---:| | 0.0015 | 0.00 | 2000 | 0.1650 | 13.69 | | 0.0023 | 0.00 | 4000 | 0.4859 | 26.23 | | 0.0017 | 0.01 | 6000 | 0.3551 | 23.24 | | 0.0030 | 0.01 | 8000 | 0.1757 | 18.02 | | 0.0015 | 0.01 | 10000 | 0.2069 | 18.25 | | 0.0365 | 1.00 | 12000 | 0.7034 | 41.99 | | 0.0007 | 1.00 | 14000 | 0.2721 | 20.08 | | 0.0012 | 1.01 | 16000 | 0.1604 | 14.07 | | 0.0038 | 1.01 | 18000 | 0.5626 | 28.47 | | 0.0019 | 1.01 | 20000 | 0.2777 | 23.72 | | 0.0031 | 1.01 | 22000 | 0.2175 | 17.92 | | 0.0199 | 2.00 | 24000 | 0.2511 | 17.33 | | 0.0014 | 2.00 | 26000 | 0.1804 | 16.60 | | 0.0007 | 2.01 | 28000 | 0.1997 | 15.62 | | 0.0017 | 2.01 | 30000 | 0.1679 | 13.25 | | 0.0021 | 2.01 | 32000 | 0.6248 | 29.96 | | 0.0020 | 2.01 | 34000 | 0.2805 | 22.55 | | 0.0007 | 3.00 | 36000 | 0.1912 | 15.72 | | 0.0017 | 3.00 | 38000 | 0.6397 | 24.51 | | 0.0024 | 3.01 | 40000 | 0.1851 | 13.44 | | 0.0005 | 3.01 | 42000 | 0.2569 | 21.54 | | 0.0005 | 3.01 | 44000 | 0.5288 | 28.18 | | 0.0050 | 4.00 | 46000 | 0.2538 | 15.05 | | 0.0026 | 4.00 | 48000 | 0.0993 | 10.70 | | 0.0009 | 4.00 | 50000 | 0.5376 | 23.57 | | 0.0010 | 4.01 | 52000 | 0.4009 | 21.67 | | 0.0018 | 4.01 | 54000 | 0.2099 | 14.74 | | 0.0016 | 4.01 | 56000 | 0.1439 | 13.13 | | 0.0107 | 5.00 | 58000 | 0.0643 | 7.68 | | 0.0011 | 5.00 | 60000 | 0.1293 | 11.51 | | 0.0009 | 5.01 | 62000 | 0.0721 | 8.04 | | 0.0008 | 5.01 | 64000 | 0.3456 | 24.58 | | 0.0007 | 5.01 | 66000 | 0.1930 | 16.79 | | 0.0005 | 5.01 | 68000 | 0.1542 | 12.18 | | 0.0009 | 6.00 | 70000 | 0.1657 | 13.00 | | 0.0004 | 6.00 | 72000 | 0.1262 | 11.16 | | 0.0004 | 6.01 | 74000 | 0.2233 | 12.73 | | 0.0010 | 6.01 | 76000 | 0.1117 | 11.79 | | 0.0021 | 6.01 | 78000 | 0.3011 | 24.35 | | 0.0014 | 6.01 | 80000 | 0.1536 | 14.13 | | 0.0010 | 7.00 | 82000 | 0.0863 | 7.93 | | 0.0014 | 7.00 | 84000 | 0.2631 | 16.91 | | 0.0003 | 7.01 | 86000 | 0.1333 | 10.72 | | 0.0004 | 7.01 | 88000 | 0.1723 | 16.66 | | 0.0008 | 7.01 | 90000 | 0.2139 | 19.11 | | 0.0047 | 8.00 | 92000 | 0.0988 | 8.88 | | 0.0003 | 8.00 | 94000 | 0.0784 | 7.12 | | 0.0004 | 8.00 | 96000 | 0.2343 | 17.37 | | 0.0019 | 8.01 | 98000 | 0.2397 | 18.74 | | 0.0010 | 8.01 | 100000 | 0.1677 | 12.29 | | 0.0004 | 8.01 | 102000 | 0.1551 | 14.36 | | 0.0013 | 9.00 | 104000 | 0.1314 | 11.37 | | 0.0003 | 9.00 | 
106000 | 0.1554 | 9.61 | | 0.0004 | 9.01 | 108000 | 0.0906 | 9.04 | | 0.0001 | 9.01 | 110000 | 0.6560 | 34.02 | | 0.0009 | 9.01 | 112000 | 0.2301 | 17.58 | | 0.0007 | 9.01 | 114000 | 0.2159 | 14.63 | | 0.0007 | 10.00 | 116000 | 0.1608 | 10.86 | | 0.0005 | 10.00 | 118000 | 0.0831 | 8.62 | | 0.0005 | 10.01 | 120000 | 0.1421 | 9.19 | | 0.0004 | 10.01 | 122000 | 0.1187 | 10.68 | | 0.0003 | 10.01 | 124000 | 0.4213 | 25.16 | | 0.0006 | 10.01 | 126000 | 0.2728 | 16.96 | | 0.0002 | 11.00 | 128000 | 0.0876 | 9.04 | | 0.0008 | 11.00 | 130000 | 0.1947 | 16.94 | | 0.0005 | 11.01 | 132000 | 0.0990 | 8.75 | | 0.0008 | 11.01 | 134000 | 0.1164 | 8.94 | | 0.0004 | 11.01 | 136000 | 0.1203 | 12.85 | | 0.0019 | 12.00 | 138000 | 0.0438 | 4.48 | | 0.0003 | 12.00 | 140000 | 0.1088 | 8.65 | | 0.0004 | 12.00 | 142000 | 0.1215 | 9.92 | | 0.0015 | 12.01 | 144000 | 0.2885 | 21.79 | | 0.0014 | 12.01 | 146000 | 0.1768 | 12.10 | | 0.0004 | 12.01 | 148000 | 0.1216 | 10.13 | | 0.0013 | 13.00 | 150000 | 0.1339 | 10.36 | | 0.0017 | 13.00 | 152000 | 0.1112 | 8.96 | | 0.0001 | 13.01 | 154000 | 0.0948 | 7.98 | | 0.0002 | 13.01 | 156000 | 0.3108 | 20.68 | | 0.0008 | 13.01 | 158000 | 0.1587 | 15.30 | | 0.0015 | 13.01 | 160000 | 0.1346 | 10.93 | | 0.0005 | 14.00 | 162000 | 0.1653 | 13.21 | | 0.0005 | 14.00 | 164000 | 0.1019 | 11.03 | | 0.0006 | 14.01 | 166000 | 0.1058 | 8.35 | | 0.0002 | 14.01 | 168000 | 0.1135 | 10.51 | | 0.0002 | 14.01 | 170000 | 0.2589 | 21.16 | | 0.0010 | 15.00 | 172000 | 0.0872 | 7.39 | | 0.0002 | 15.00 | 174000 | 0.0600 | 6.66 | | 0.0007 | 15.00 | 176000 | 0.4865 | 31.15 | | 0.0011 | 15.01 | 178000 | 0.2016 | 15.32 | | 0.0005 | 15.01 | 180000 | 0.1639 | 10.70 | | 0.0006 | 15.01 | 182000 | 0.1186 | 12.50 | | 0.0006 | 16.00 | 184000 | 0.1166 | 9.92 | | 0.0005 | 16.00 | 186000 | 0.1155 | 7.33 | | 0.0004 | 16.01 | 188000 | 0.0656 | 6.72 | | 0.0008 | 16.01 | 190000 | 0.2959 | 17.06 | | 0.0002 | 16.01 | 192000 | 0.1560 | 12.60 | | 0.0005 | 16.01 | 194000 | 0.2069 | 12.79 | | 0.0015 | 17.00 | 196000 | 0.1045 | 8.83 | | 0.0002 | 17.00 | 198000 | 0.1018 | 8.73 | | 0.0003 | 17.01 | 200000 | 0.1292 | 7.20 | | 0.0009 | 17.01 | 202000 | 0.0931 | 9.25 | | 0.0019 | 17.01 | 204000 | 0.1964 | 17.42 | | 0.0013 | 17.01 | 206000 | 0.0973 | 7.10 | | 0.0007 | 18.00 | 208000 | 0.0941 | 7.79 | | 0.0003 | 18.00 | 210000 | 0.1350 | 11.12 | | 0.0001 | 18.01 | 212000 | 0.1246 | 8.33 | | 0.0002 | 18.01 | 214000 | 0.1008 | 10.11 | | 0.0001 | 18.01 | 216000 | 0.1457 | 12.60 | | 0.0013 | 19.00 | 218000 | 0.0435 | 4.33 | | 0.0002 | 19.00 | 220000 | 0.0605 | 5.19 | | 0.0003 | 19.00 | 222000 | 0.2734 | 18.36 | | 0.0003 | 19.01 | 224000 | 0.2369 | 15.24 | | 0.0001 | 19.01 | 226000 | 0.0959 | 6.91 | | 0.0003 | 19.01 | 228000 | 0.0936 | 7.28 | | 0.0008 | 20.00 | 230000 | 0.0783 | 6.45 | | 0.0002 | 20.00 | 232000 | 0.1215 | 9.19 | | 0.0002 | 20.01 | 234000 | 0.0851 | 8.71 | | 0.0001 | 20.01 | 236000 | 0.3519 | 22.84 | | 0.0003 | 20.01 | 238000 | 0.1444 | 12.20 | | 0.0005 | 20.01 | 240000 | 0.1581 | 9.67 | | 0.0003 | 21.00 | 242000 | 0.1343 | 9.57 | | 0.0003 | 21.00 | 244000 | 0.1086 | 7.72 | | 0.0002 | 21.01 | 246000 | 0.1358 | 7.54 | | 0.0002 | 21.01 | 248000 | 0.0717 | 6.30 | | 0.0004 | 21.01 | 250000 | 0.1298 | 10.74 | | 0.0001 | 21.01 | 252000 | 0.1443 | 9.32 | | 0.0003 | 22.00 | 254000 | 0.0451 | 4.10 | | 0.0002 | 22.00 | 256000 | 0.1284 | 10.82 | | 0.0001 | 22.01 | 258000 | 0.1014 | 7.26 | | 0.0005 | 22.01 | 260000 | 0.1175 | 7.58 | | 0.0002 | 22.01 | 262000 | 0.0875 | 7.64 | | 0.0006 | 23.00 | 264000 | 0.0402 | 3.81 | | 0.0001 | 23.00 | 
266000 | 0.0462 | 5.05 | | 0.0002 | 23.00 | 268000 | 0.0650 | 7.98 | | 0.0007 | 23.01 | 270000 | 0.1429 | 12.75 | | 0.0002 | 23.01 | 272000 | 0.0977 | 7.75 | | 0.0001 | 23.01 | 274000 | 0.0982 | 8.52 | | 0.0005 | 24.00 | 276000 | 0.0998 | 7.05 | | 0.0002 | 24.00 | 278000 | 0.1020 | 7.75 | | 0.0001 | 24.01 | 280000 | 0.0735 | 6.64 | | 0.0002 | 24.01 | 282000 | 0.3529 | 19.78 | | 0.0003 | 24.01 | 284000 | 0.1658 | 14.15 | | 0.0001 | 24.01 | 286000 | 0.1560 | 11.45 | | 0.0002 | 25.00 | 288000 | 0.1662 | 10.49 | | 0.0004 | 25.00 | 290000 | 0.1091 | 10.30 | | 0.0001 | 25.01 | 292000 | 0.1403 | 9.94 | | 0.0002 | 25.01 | 294000 | 0.1119 | 8.92 | | 0.0000 | 25.01 | 296000 | 0.3880 | 22.00 | | 0.0002 | 26.00 | 298000 | 0.0605 | 4.67 | | 0.0000 | 26.00 | 300000 | 0.0621 | 4.92 | | 0.0003 | 26.00 | 302000 | 0.2317 | 13.61 | | 0.0002 | 26.01 | 304000 | 0.0863 | 6.93 | | 0.0005 | 26.01 | 306000 | 0.0940 | 6.74 | | 0.0006 | 26.01 | 308000 | 0.0879 | 8.10 | | 0.0001 | 27.00 | 310000 | 0.0515 | 4.14 | | 0.0001 | 27.00 | 312000 | 0.0680 | 4.42 | | 0.0000 | 27.01 | 314000 | 0.0987 | 8.14 | | 0.0005 | 27.01 | 316000 | 0.3038 | 16.45 | | 0.0000 | 27.01 | 318000 | 0.0865 | 6.36 | | 0.0003 | 27.01 | 320000 | 0.1186 | 7.60 | | 0.0004 | 28.00 | 322000 | 0.1314 | 8.14 | | 0.0000 | 28.00 | 324000 | 0.0978 | 6.28 | | 0.0001 | 28.01 | 326000 | 0.1021 | 7.26 | | 0.0007 | 28.01 | 328000 | 0.1285 | 10.45 | | 0.0006 | 28.01 | 330000 | 0.1283 | 10.91 | | 0.0003 | 28.01 | 332000 | 0.1309 | 9.92 | | 0.0002 | 29.00 | 334000 | 0.1114 | 9.09 | | 0.0006 | 29.00 | 336000 | 0.1049 | 9.48 | | 0.0000 | 29.01 | 338000 | 0.0879 | 7.08 | | 0.0001 | 29.01 | 340000 | 0.0644 | 5.57 | | 0.0004 | 29.01 | 342000 | 0.1470 | 10.53 | | 0.0003 | 30.00 | 344000 | 0.0425 | 3.39 | | 0.0000 | 30.00 | 346000 | 0.0358 | 3.22 | | 0.0002 | 30.00 | 348000 | 0.2155 | 13.50 | | 0.0002 | 30.01 | 350000 | 0.1227 | 10.49 | | 0.0001 | 30.01 | 352000 | 0.1400 | 7.77 | | 0.0033 | 30.01 | 354000 | 0.1205 | 10.40 | | 0.0001 | 31.00 | 356000 | 0.0440 | 3.39 | | 0.0002 | 31.00 | 358000 | 0.0825 | 5.44 | | 0.0002 | 31.01 | 360000 | 0.0743 | 7.77 | | 0.0004 | 31.01 | 362000 | 0.2200 | 15.57 | | 0.0002 | 31.01 | 364000 | 0.1102 | 8.39 | | 0.0001 | 31.01 | 366000 | 0.1132 | 7.81 | | 0.0003 | 32.00 | 368000 | 0.1195 | 8.92 | | 0.0001 | 32.00 | 370000 | 0.0605 | 4.67 | | 0.0000 | 32.01 | 372000 | 0.0545 | 4.31 | | 0.0003 | 32.01 | 374000 | 0.1234 | 10.55 | | 0.0001 | 32.01 | 376000 | 0.0810 | 8.04 | | 0.0001 | 32.01 | 378000 | 0.1075 | 7.14 | | 0.0004 | 33.00 | 380000 | 0.0766 | 6.05 | | 0.0005 | 33.00 | 382000 | 0.0983 | 8.42 | | 0.0000 | 33.01 | 384000 | 0.0772 | 5.69 | | 0.0002 | 33.01 | 386000 | 0.0823 | 6.89 | | 0.0004 | 33.01 | 388000 | 0.0938 | 8.33 | | 0.0001 | 34.00 | 390000 | 0.0531 | 3.75 | | 0.0003 | 34.00 | 392000 | 0.0452 | 3.43 | | 0.0004 | 34.00 | 394000 | 0.1294 | 11.22 | | 0.0004 | 34.01 | 396000 | 0.1213 | 10.17 | | 0.0000 | 34.01 | 398000 | 0.1238 | 8.77 | | 0.0004 | 34.01 | 400000 | 0.0922 | 6.09 | | 0.0003 | 35.00 | 402000 | 0.0613 | 4.73 | | 0.0000 | 35.00 | 404000 | 0.0533 | 3.18 | | 0.0001 | 35.01 | 406000 | 0.0726 | 6.26 | | 0.0002 | 35.01 | 408000 | 0.2262 | 13.33 | | 0.0002 | 35.01 | 410000 | 0.0819 | 7.35 | | 0.0000 | 35.01 | 412000 | 0.0978 | 6.85 | | 0.0001 | 36.00 | 414000 | 0.1319 | 8.42 | | 0.0001 | 36.00 | 416000 | 0.0543 | 4.31 | | 0.0002 | 36.01 | 418000 | 0.0757 | 5.57 | | 0.0001 | 36.01 | 420000 | 0.0819 | 7.62 | | 0.0001 | 36.01 | 422000 | 0.1564 | 10.95 | | 0.0001 | 37.00 | 424000 | 0.0912 | 6.49 | | 0.0003 | 37.00 | 426000 | 
0.0702 | 5.32 | | 0.0004 | 37.00 | 428000 | 0.1477 | 9.02 | | 0.0000 | 37.01 | 430000 | 0.0772 | 6.18 | | 0.0001 | 37.01 | 432000 | 0.0775 | 6.47 | | 0.0002 | 37.01 | 434000 | 0.0546 | 5.00 | | 0.0000 | 38.00 | 436000 | 0.0444 | 3.27 | | 0.0001 | 38.00 | 438000 | 0.0380 | 2.85 | | 0.0005 | 38.01 | 440000 | 0.1071 | 8.73 | | 0.0003 | 38.01 | 442000 | 0.1291 | 10.03 | | 0.0000 | 38.01 | 444000 | 0.0772 | 6.18 | | 0.0001 | 38.01 | 446000 | 0.0799 | 6.28 | | 0.0001 | 39.00 | 448000 | 0.0480 | 3.56 | | 0.0000 | 57.01 | 658000 | 0.0630 | 3.75 | | 0.0001 | 57.01 | 660000 | 0.0610 | 3.73 | | 0.0000 | 57.01 | 662000 | 0.0430 | 2.72 | | 0.0006 | 57.01 | 664000 | 0.0494 | 2.87 | | 0.0000 | 58.00 | 666000 | 0.0523 | 2.95 | | 0.0003 | 58.00 | 668000 | 0.0455 | 2.78 | | 0.0001 | 58.01 | 670000 | 0.0379 | 2.43 | | 0.0000 | 58.01 | 672000 | 0.0588 | 3.64 | | 0.0000 | 58.01 | 674000 | 0.0365 | 2.34 | | 0.0000 | 58.01 | 676000 | 0.0395 | 2.60 | | 0.0000 | 59.00 | 678000 | 0.0662 | 3.77 | | 0.0000 | 59.00 | 680000 | 0.0376 | 2.34 | | 0.0000 | 59.01 | 682000 | 0.0406 | 2.34 | | 0.0003 | 59.01 | 684000 | 0.0385 | 2.22 | | 0.0001 | 59.01 | 686000 | 0.0551 | 3.18 | | 0.0000 | 60.00 | 688000 | 0.0409 | 2.72 | | 0.0001 | 60.00 | 690000 | 0.0397 | 2.32 | | 0.0001 | 60.00 | 692000 | 0.0471 | 3.31 | | 0.0001 | 60.01 | 694000 | 0.0348 | 2.16 | | 0.0000 | 60.01 | 696000 | 0.0338 | 2.22 | | 0.0000 | 60.01 | 698000 | 0.0358 | 2.30 | | 0.0000 | 61.00 | 700000 | 0.0376 | 2.24 | | 0.0000 | 61.00 | 702000 | 0.0386 | 2.41 | | 0.0000 | 61.01 | 704000 | 0.0429 | 2.60 | | 0.0002 | 61.01 | 706000 | 0.0675 | 3.94 | | 0.0000 | 61.01 | 708000 | 0.0381 | 2.47 | | 0.0000 | 61.01 | 710000 | 0.0419 | 2.72 | | 0.0001 | 62.00 | 712000 | 0.0607 | 3.54 | | 0.0000 | 62.00 | 714000 | 0.0379 | 2.22 | | 0.0000 | 62.01 | 716000 | 0.0412 | 2.60 | | 0.0008 | 62.01 | 718000 | 0.0753 | 4.00 | | 0.0001 | 62.01 | 720000 | 0.0420 | 2.45 | | 0.0000 | 63.00 | 722000 | 0.0385 | 2.30 | | 0.0000 | 63.00 | 724000 | 0.0563 | 2.99 | | 0.0000 | 63.00 | 726000 | 0.0358 | 2.18 | | 0.0000 | 63.01 | 728000 | 0.0337 | 2.14 | | 0.0001 | 63.01 | 730000 | 0.0351 | 2.26 | | 0.0000 | 63.01 | 732000 | 0.0408 | 2.60 | | 0.0000 | 64.00 | 734000 | 0.0339 | 2.05 | | 0.0001 | 64.00 | 736000 | 0.0373 | 2.14 | | 0.0000 | 64.01 | 738000 | 0.0566 | 3.37 | | 0.0000 | 64.01 | 740000 | 0.0374 | 2.41 | | 0.0000 | 64.01 | 742000 | 0.0350 | 2.20 | | 0.0000 | 64.01 | 744000 | 0.0354 | 2.24 | | 0.0000 | 65.00 | 746000 | 0.0341 | 2.16 | | 0.0000 | 65.00 | 748000 | 0.0366 | 2.37 | | 0.0001 | 65.01 | 750000 | 0.0459 | 2.57 | | 0.0001 | 65.01 | 752000 | 0.0494 | 2.76 | | 0.0000 | 65.01 | 754000 | 0.0333 | 1.99 | | 0.0000 | 65.01 | 756000 | 0.0345 | 1.99 | | 0.0000 | 66.00 | 758000 | 0.0401 | 2.32 | | 0.0001 | 66.00 | 760000 | 0.0315 | 1.82 | | 0.0000 | 66.01 | 762000 | 0.0365 | 1.90 | | 0.0000 | 66.01 | 764000 | 0.0446 | 2.55 | | 0.0000 | 66.01 | 766000 | 0.0370 | 2.11 | | 0.0000 | 67.00 | 768000 | 0.0322 | 1.90 | | 0.0000 | 67.00 | 770000 | 0.0394 | 2.18 | | 0.0001 | 67.00 | 772000 | 0.0437 | 2.60 | | 0.0000 | 67.01 | 774000 | 0.0334 | 1.95 | | 0.0000 | 67.01 | 776000 | 0.0363 | 2.14 | | 0.0000 | 67.01 | 778000 | 0.0368 | 2.16 | | 0.0000 | 68.00 | 780000 | 0.0315 | 1.86 | | 0.0000 | 68.00 | 782000 | 0.0409 | 2.28 | | 0.0001 | 68.01 | 784000 | 0.0441 | 2.53 | | 0.0000 | 68.01 | 786000 | 0.0380 | 2.26 | | 0.0000 | 68.01 | 788000 | 0.0384 | 2.20 | | 0.0000 | 68.01 | 790000 | 0.0372 | 2.18 | | 0.0000 | 69.00 | 792000 | 0.0374 | 2.26 | | 0.0000 | 69.00 | 794000 | 0.0357 | 2.20 | | 0.0000 | 69.01 
| 796000 | 0.0415 | 2.47 | | 0.0000 | 69.01 | 798000 | 0.0439 | 2.60 | | 0.0000 | 69.01 | 800000 | 0.0411 | 2.24 | | 0.0002 | 69.01 | 802000 | 0.0416 | 2.32 | | 0.0000 | 70.00 | 804000 | 0.0395 | 2.30 | | 0.0000 | 70.00 | 806000 | 0.0352 | 2.09 | | 0.0001 | 70.01 | 808000 | 0.0353 | 2.07 | | 0.0000 | 70.01 | 810000 | 0.0387 | 2.03 | | 0.0000 | 70.01 | 812000 | 0.0387 | 2.07 | | 0.0000 | 71.00 | 814000 | 0.0370 | 2.14 | | 0.0000 | 71.00 | 816000 | 0.0400 | 2.22 | | 0.0001 | 71.00 | 818000 | 0.0458 | 2.64 | | 0.0000 | 71.01 | 820000 | 0.0376 | 2.09 | | 0.0000 | 71.01 | 822000 | 0.0386 | 2.18 | | 0.0000 | 71.01 | 824000 | 0.0385 | 2.16 | | 0.0000 | 72.00 | 826000 | 0.0369 | 2.14 | | 0.0000 | 72.00 | 828000 | 0.0405 | 2.18 | | 0.0000 | 72.01 | 830000 | 0.0474 | 2.57 | | 0.0000 | 72.01 | 832000 | 0.0484 | 2.68 | | 0.0000 | 72.01 | 834000 | 0.0445 | 2.53 | | 0.0000 | 72.01 | 836000 | 0.0444 | 2.51 | | 0.0000 | 73.00 | 838000 | 0.0447 | 2.55 | | 0.0000 | 73.00 | 840000 | 0.0411 | 2.45 | | 0.0000 | 73.01 | 842000 | 0.0413 | 2.49 | | 0.0000 | 73.01 | 844000 | 0.0430 | 2.43 | | 0.0000 | 73.01 | 846000 | 0.0409 | 2.37 | | 0.0000 | 74.00 | 848000 | 0.0399 | 2.39 | | 0.0000 | 74.00 | 850000 | 0.0425 | 2.47 | | 0.0000 | 74.00 | 852000 | 0.0390 | 2.24 | | 0.0000 | 74.01 | 854000 | 0.0392 | 2.28 | | 0.0000 | 74.01 | 856000 | 0.0410 | 2.30 | | 0.0000 | 74.01 | 858000 | 0.0409 | 2.30 | | 0.0000 | 75.00 | 860000 | 0.0393 | 2.26 | | 0.0000 | 75.00 | 862000 | 0.0429 | 2.47 | | 0.0000 | 75.01 | 864000 | 0.0426 | 2.43 | | 0.0000 | 75.01 | 866000 | 0.0421 | 2.45 | | 0.0000 | 75.01 | 868000 | 0.0432 | 2.47 | | 0.0000 | 75.01 | 870000 | 0.0425 | 2.45 | | 0.0000 | 76.00 | 872000 | 0.0423 | 2.45 | | 0.0000 | 76.00 | 874000 | 0.0423 | 2.43 | | 0.0000 | 76.01 | 876000 | 0.0423 | 2.45 | | 0.0000 | 76.01 | 878000 | 0.0423 | 2.41 | | 0.0000 | 76.01 | 880000 | 0.0422 | 2.41 | | 0.0000 | 76.01 | 882000 | 0.0422 | 2.37 | | 0.0000 | 77.00 | 884000 | 0.0415 | 2.37 | | 0.0000 | 77.00 | 886000 | 0.0405 | 2.32 | | 0.0000 | 77.01 | 888000 | 0.0405 | 2.32 | | 0.0000 | 77.01 | 890000 | 0.0405 | 2.32 | | 0.0000 | 77.01 | 892000 | 0.0406 | 2.32 | | 0.0000 | 78.00 | 894000 | 0.0406 | 2.34 | | 0.0000 | 78.00 | 896000 | 0.0405 | 2.32 | | 0.0000 | 78.00 | 898000 | 0.0405 | 2.34 | | 0.0000 | 78.01 | 900000 | 0.0405 | 2.34 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
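The card above leaves its usage sections as "More information needed", so here is a minimal inference sketch that is not taken from the authors: it assumes the checkpoint loads through the standard 🤗 Transformers automatic-speech-recognition pipeline, as Whisper fine-tunes normally do, and `sample.wav` is a placeholder path for an audio file you would supply yourself.

```python
# Hedged usage sketch for the fine-tuned Whisper-tiny checkpoint above.
# Assumption: the repo works with the standard ASR pipeline; "sample.wav"
# is a placeholder for your own audio clip.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="albertuspekerti/whispertiny_fruit25syl_v7_2",
)

result = asr("sample.wav")  # transcribe a local audio file
print(result["text"])
```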
matonski/SAEdiff_ftb-llama32_1B_instruct-ebma-L7-s2-t100-k48-lr1e-04-x6
matonski
2025-08-25T08:57:11Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-08-25T08:56:57Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
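The card points to the PyTorchModelHubMixin integration but leaves its code section empty. The sketch below is illustrative only: models pushed with the mixin are reloaded through the Python class they were defined with, and that class is not published in the card, so `SAEDiff`, its constructor arguments, and the layer sizes are hypothetical placeholders; only the `from_pretrained` call pattern and the repo id come from the card.

```python
# Illustrative only: `SAEDiff` and its dimensions are hypothetical stand-ins for
# the author's real class, which ships with their training code. The loading
# pattern is the standard PyTorchModelHubMixin API from huggingface_hub.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class SAEDiff(nn.Module, PyTorchModelHubMixin):  # hypothetical class name
    def __init__(self, d_in: int = 2048, d_hidden: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))

# Rebuilds the model from the config and weights stored in the repo.
model = SAEDiff.from_pretrained(
    "matonski/SAEdiff_ftb-llama32_1B_instruct-ebma-L7-s2-t100-k48-lr1e-04-x6"
)
```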
foronly09/blockassist-bc-diving_deft_chicken_1756110129
foronly09
2025-08-25T08:56:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "diving deft chicken", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:56:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - diving deft chicken --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
saibie1677/gpt-oss-20b-multilingual-reasoner
saibie1677
2025-08-25T08:56:05Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "dataset:HuggingFaceH4/Multilingual-Thinking", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-08-23T05:57:06Z
--- base_model: openai/gpt-oss-20b datasets: HuggingFaceH4/Multilingual-Thinking library_name: transformers model_name: gpt-oss-20b-multilingual-reasoner tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gpt-oss-20b-multilingual-reasoner This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on a Korean (한국어) translation of the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="saibie1677/gpt-oss-20b-multilingual-reasoner", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ChenWu98/numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1
ChenWu98
2025-08-25T08:55:17Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us" ]
null
2025-08-25T08:54:03Z
--- base_model: Qwen/Qwen2.5-1.5B library_name: transformers model_name: numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_identical_split_random_weighted_alpha3.0_1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/qohkms55) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756110902
Sayemahsjn
2025-08-25T08:54:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:54:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1756110500
aleebaster
2025-08-25T08:54:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:54:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1756110496
quantumxnode
2025-08-25T08:53:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:53:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dv5f/blockassist-bc-restless_poisonous_orangutan_1756111331
dv5f
2025-08-25T08:53:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless poisonous orangutan", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:53:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - restless poisonous orangutan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756111988
eusuf01
2025-08-25T08:53:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:53:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1756110396
hakimjustbao
2025-08-25T08:53:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:53:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
EmilRyd/gpt-oss-20b-aquarat-ground-truth-on-policy-3e5-stylized-100-50
EmilRyd
2025-08-25T08:51:58Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-25T08:46:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
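Since the "How to Get Started with the Model" section of the card above is empty, here is a minimal, hedged sketch: it assumes the checkpoint behaves like a standard `gpt_oss` causal-LM repo and loads through the 🤗 Transformers text-generation pipeline; the prompt is arbitrary and nothing in the snippet comes from the card itself.

```python
# Hedged sketch: assumes the repo loads as a plain transformers causal LM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EmilRyd/gpt-oss-20b-aquarat-ground-truth-on-policy-3e5-stylized-100-50",
    device_map="auto",  # a 20B model generally needs accelerator memory
)

messages = [{"role": "user", "content": "A train covers 120 km in 2 hours. What is its average speed?"}]
out = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(out["generated_text"])
```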
ricodr/blockassist-bc-twitchy_toothy_clam_1756111819
ricodr
2025-08-25T08:51:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy toothy clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:51:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy toothy clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thaymanhinhsamsung24h/tiem-thay-man-hinh-samsung-a73-gia-re
thaymanhinhsamsung24h
2025-08-25T08:51:24Z
0
0
null
[ "region:us" ]
null
2025-08-25T08:50:43Z
# Affordable Samsung A73 5G Screen Replacement in Ho Chi Minh City (TPHCM): Professional Service at Bệnh Viện Điện Thoại, Laptop 24h  When the screen of your Samsung A73 5G runs into trouble, finding an [affordable Samsung A73 5G screen-replacement shop in TPHCM](https://chamsocdidong.com/thay-man-hinh-samsung-galaxy-a73-ds16940) becomes essential. At **Bệnh Viện Điện Thoại, Laptop 24h** we replace Samsung screens with genuine panels at reasonable prices, with guaranteed quality and results. Below are the details of the Samsung screen-replacement service at Bệnh Viện Điện Thoại, Laptop 24h. ![](https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/thay-man-hinh-samsung-a73.jpg) ### When Does a Samsung Screen Need Replacing? The screen is one of the most important parts of a Samsung phone, and a damaged screen seriously degrades the experience. These signs mean it is time to visit an [affordable Samsung screen-replacement shop](https://issuu.com/thaymanhinhsamsung24h): 1. **Cracked or shattered screen**: the clearest sign; after a drop or hard knock the panel can crack, and replacing it is necessary to protect the phone and keep using it safely. 2. **No display or a dim, blurry image**: a screen that shows nothing, or only a faint or smeared picture, is clearly damaged; a new screen restores the original display quality. 3. **Unresponsive touch**: if touch input is not registered or lags, the panel is faulty and replacement is the best fix. 4. **Ink bleeding or black patches**: dark blotches not only look bad but also interfere with normal use, a sign that the screen should be replaced. 5. **Wrong colors or uneven brightness**: inaccurate colors or patchy brightness are resolved by fitting a genuine replacement screen. If you run into any of these problems, bring the phone to **Bệnh Viện Điện Thoại, Laptop 24h** for a check-up and a genuine replacement screen.
### Where to Get a Genuine Samsung Screen Replaced at a Fair Price **Bệnh Viện Điện Thoại, Laptop 24h** is a reliable choice when you are looking for a shop that fits genuine Samsung screens at a fair price: - **Genuine Samsung panels**: we only fit original Samsung screens, so the phone stays stable and does not develop new display faults after the swap. - **Fair, transparent pricing**: prices suit the customer's needs, with no hidden fees. - **Fast turnaround**: we know you need the phone back quickly, so the replacement usually takes only 1-2 hours. - **Long warranty**: every replacement comes with a long warranty, so you can use the phone without worrying about the new screen. With quality service at a fair price, **Bệnh Viện Điện Thoại, Laptop 24h** is a trusted place to have a genuine Samsung screen fitted in TPHCM. ![](https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/truoc-va-sau-khi-thay-man-hinh-samsung-A73(1).jpg)
### Does Replacing the Screen Affect the Rest of the Phone? Many people worry that a screen replacement might harm other parts of the device. If you choose **Bệnh Viện Điện Thoại, Laptop 24h**, there is no need to worry: 1. **Genuine screens** are fully compatible with the phone's other components, so the device keeps running stably after the swap. 2. **Experienced technicians** carry out the replacement carefully and precisely without affecting other components. 3. **Thorough post-replacement testing** of touch, display, and brightness confirms everything works normally. For these reasons, a Samsung screen replacement at **Bệnh Viện Điện Thoại, Laptop 24h** will not affect the rest of your phone.
### Bệnh Viện Điện Thoại, Laptop 24h Uses Genuine Screens for Every Replacement **Bệnh Viện Điện Thoại, Laptop 24h** commits to using **genuine Samsung screens** in all replacement services, which protects your phone and keeps every feature working as it did originally. The panel types we fit: - **Super AMOLED**: Samsung's high-end display technology with vivid colors, high contrast, and low power use, used in flagship lines such as Galaxy S, Note, and the A series. - **Standard AMOLED**: suited to mid-range phones, with sharp, power-efficient display quality. - **LCD**: suited to budget phones, with high brightness and a clear picture in any lighting. We are committed to a quality Samsung screen-replacement service so you can use your phone without worrying about display faults. ![](https://chamsocdidong.com/upload_images/images/thay-man-hinh-samsung-a73-5g/cam-ket-voi-khach-hang.jpg)
### How to Use the Service at Bệnh Viện Điện Thoại, Laptop 24h 1. **Contact us**: call the hotline or visit chamsocdidong.com to ask for advice or book a replacement. 2. **Bring the phone to a store**: a technician at any of our branches will inspect it and replace the screen. 3. **Screen replacement**: the work is done quickly, in about 1-2 hours. 4. **Receive the warranty**: after the replacement you get an official warranty slip for long-term peace of mind. Visit Bệnh Viện Điện Thoại, Laptop 24h for a genuine, fast, and affordable Samsung screen replacement; we are always ready to serve you!
eusuf01/blockassist-bc-smooth_humming_butterfly_1756111856
eusuf01
2025-08-25T08:51:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:51:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ankitA2003/blockassist-bc-fishy_dappled_elephant_1756111828
ankitA2003
2025-08-25T08:51:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy dappled elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:51:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy dappled elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
eusuf01/blockassist-bc-smooth_humming_butterfly_1756111730
eusuf01
2025-08-25T08:49:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth humming butterfly", "arxiv:2504.07091", "region:us" ]
null
2025-08-25T08:49:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth humming butterfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/reij-s-styles-il-flux
Muapi
2025-08-25T08:48:57Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-25T08:48:37Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Reij's styles IL & FLUX ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: reijfrmd, white frame, ink ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:984240@1102503", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron
badaoui
2025-08-25T08:48:03Z
20
0
null
[ "llama", "neuron", "optimized", "aws-neuron", "text-generation", "base_model:HuggingFaceTB/SmolLM2-135M-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct", "region:us" ]
text-generation
2025-08-22T12:36:16Z
--- tags: - neuron - optimized - aws-neuron - text-generation base_model: HuggingFaceTB/SmolLM2-135M-Instruct --- # Neuron-Optimized HuggingFaceTB/SmolLM2-135M-Instruct This repository contains AWS Neuron-optimized files for [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct). ## Model Details - **Base Model**: [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) - **Task**: text-generation - **Optimization**: AWS Neuron compilation - **Generated by**: [badaoui](https://huggingface.co/badaoui) - **Generated using**: [Optimum Neuron Compiler Space](https://huggingface.co/spaces/optimum/neuron-export) ## Usage This model has been optimized for AWS Neuron devices (Inferentia/Trainium). To use it: ```python from optimum.neuron import NeuronModelForCausalLM model = NeuronModelForCausalLM.from_pretrained("badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron") ``` ## Performance These files are pre-compiled for AWS Neuron devices and should provide improved inference performance compared to the original model when deployed on Inferentia or Trainium instances. ## Original Model For the original model, training details, and more information, please visit: [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
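As a follow-up to the loading snippet above, a hedged end-to-end sketch: it assumes `NeuronModelForCausalLM` exposes the usual `generate` API on a Neuron device and that the tokenizer can be taken from the original base model, since the Neuron export may or may not bundle tokenizer files; the prompt is arbitrary.

```python
# Hedged generation sketch for the Neuron-compiled SmolLM2 export.
# Assumption: the tokenizer is loaded from the original base repo; if the
# Neuron repo bundles tokenizer files, it could be loaded from there instead.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model = NeuronModelForCausalLM.from_pretrained("badaoui/HuggingFaceTB-SmolLM2-135M-Instruct-neuron")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

prompt = "What is gravity?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```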