Scheduled Commit
- data/2503.07906.json +1 -0
- data/2503.09949.json +1 -0
- data/2503.11509.json +1 -0
- data/2503.12689.json +1 -0
- data/2503.13356.json +1 -0
- data/2503.13358.json +1 -0
- data/2503.13834.json +1 -0
- data/2503.13891.json +1 -0
- data/2503.14201.json +1 -0
- data/2503.14237.json +1 -0
- data/2503.14487.json +1 -0
- data/2503.15242.json +1 -0
- data/2503.15451.json +1 -0
- data/2503.15558.json +1 -0
- data/2503.15567.json +1 -0
- data/2503.15672.json +1 -0
- data/2503.15855.json +1 -0
- data/2503.16031.json +1 -0
- data/2503.16055.json +1 -0
- data/2503.16057.json +1 -0
- data/2503.16188.json +1 -0
- data/2503.16194.json +1 -0
- data/2503.16212.json +1 -0
- data/2503.16219.json +1 -0
- data/2503.16252.json +1 -0
- data/2503.16278.json +1 -0
- data/2503.16302.json +1 -0
- data/2503.16322.json +1 -0
- data/2503.16356.json +1 -0
- data/2503.16365.json +1 -0
- data/2503.16375.json +1 -0
- data/2503.16397.json +1 -0
- data/2503.16413.json +1 -0
- data/2503.16416.json +1 -0
- data/2503.16418.json +1 -0
- data/2503.16420.json +1 -0
- data/2503.16421.json +1 -0
- data/2503.16422.json +1 -0
- data/2503.16425.json +1 -0
- data/2503.16428.json +1 -0
- data/2503.16429.json +1 -0
data/2503.07906.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.07906", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [CapArena: Benchmarking and Analyzing Detailed Image Captioning in the LLM Era](https://huggingface.co/papers/2503.12329) (2025)\n* [Image Captioning Evaluation in the Age of Multimodal LLMs: Challenges and Future Perspectives](https://huggingface.co/papers/2503.14604) (2025)\n* [Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models](https://huggingface.co/papers/2502.01419) (2025)\n* [Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization](https://huggingface.co/papers/2502.13146) (2025)\n* [Image Embedding Sampling Method for Diverse Captioning](https://huggingface.co/papers/2502.10118) (2025)\n* [Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption](https://huggingface.co/papers/2503.09279) (2025)\n* [Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images](https://huggingface.co/papers/2502.13928) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.09949.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.09949", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning](https://huggingface.co/papers/2503.02341) (2025)\n* [MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation](https://huggingface.co/papers/2502.01719) (2025)\n* [Generative Frame Sampler for Long Video Understanding](https://huggingface.co/papers/2503.09146) (2025)\n* [Impossible Videos](https://huggingface.co/papers/2503.14378) (2025)\n* [VidCapBench: A Comprehensive Benchmark of Video Captioning for Controllable Text-to-Video Generation](https://huggingface.co/papers/2502.12782) (2025)\n* [FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding](https://huggingface.co/papers/2503.14935) (2025)\n* [AIGVE-Tool: AI-Generated Video Evaluation Toolkit with Multifaceted Benchmark](https://huggingface.co/papers/2503.14064) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.11509.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.11509", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think](https://huggingface.co/papers/2502.20172) (2025)\n* [Image Embedding Sampling Method for Diverse Captioning](https://huggingface.co/papers/2502.10118) (2025)\n* [Leveraging Large Language Models For Scalable Vector Graphics Processing: A Review](https://huggingface.co/papers/2503.04983) (2025)\n* [Pretrained Image-Text Models are Secretly Video Captioners](https://huggingface.co/papers/2502.13363) (2025)\n* [Fine-Grained Video Captioning through Scene Graph Consolidation](https://huggingface.co/papers/2502.16427) (2025)\n* [Semantic-Clipping: Efficient Vision-Language Modeling with Semantic-Guidedd Visual Selection](https://huggingface.co/papers/2503.11794) (2025)\n* [REAL: Realism Evaluation of Text-to-Image Generation Models for Effective Data Augmentation](https://huggingface.co/papers/2502.10663) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.12689.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.12689", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [HuViDPO:Enhancing Video Generation through Direct Preference Optimization for Human-Centric Alignment](https://huggingface.co/papers/2502.01690) (2025)\n* [IPO: Iterative Preference Optimization for Text-to-Video Generation](https://huggingface.co/papers/2502.02088) (2025)\n* [CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers](https://huggingface.co/papers/2502.06527) (2025)\n* [Concat-ID: Towards Universal Identity-Preserving Video Synthesis](https://huggingface.co/papers/2503.14151) (2025)\n* [EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion](https://huggingface.co/papers/2501.13452) (2025)\n* [Dynamic Concepts Personalization from Single Videos](https://huggingface.co/papers/2502.14844) (2025)\n* [DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability](https://huggingface.co/papers/2503.06505) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.13356.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.13356", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [VLMs Play StarCraft II: A Benchmark and Multimodal Decision Method](https://huggingface.co/papers/2503.05383) (2025)\n* [The Evolving Landscape of LLM- and VLM-Integrated Reinforcement Learning](https://huggingface.co/papers/2502.15214) (2025)\n* [PCGRLLM: Large Language Model-Driven Reward Design for Procedural Content Generation Reinforcement Learning](https://huggingface.co/papers/2502.10906) (2025)\n* [Large Language Models for Multi-Robot Systems: A Survey](https://huggingface.co/papers/2502.03814) (2025)\n* [AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO](https://huggingface.co/papers/2502.14669) (2025)\n* [CombatVLA: An Efficient Vision-Language-Action Model for Combat Tasks in 3D Action Role-Playing Games](https://huggingface.co/papers/2503.09527) (2025)\n* [Static Vs. Agentic Game Master AI for Facilitating Solo Role-Playing Experiences](https://huggingface.co/papers/2502.19519) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.13358.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.13358", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Boosting Diffusion-Based Text Image Super-Resolution Model Towards Generalized Real-World Scenarios](https://huggingface.co/papers/2503.07232) (2025)\n* [CTSR: Controllable Fidelity-Realness Trade-off Distillation for Real-World Image Super Resolution](https://huggingface.co/papers/2503.14272) (2025)\n* [One-Step Diffusion Model for Image Motion-Deblurring](https://huggingface.co/papers/2503.06537) (2025)\n* [AdaptSR: Low-Rank Adaptation for Efficient and Scalable Real-World Super-Resolution](https://huggingface.co/papers/2503.07748) (2025)\n* [Adding Additional Control to One-Step Diffusion with Joint Distribution Matching](https://huggingface.co/papers/2503.06652) (2025)\n* [One Diffusion Step to Real-World Super-Resolution via Flow Trajectory Distillation](https://huggingface.co/papers/2502.01993) (2025)\n* [Reconciling Stochastic and Deterministic Strategies for Zero-shot Image Restoration using Diffusion Model in Dual](https://huggingface.co/papers/2503.01288) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.13834.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.13834", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [PARIC: Probabilistic Attention Regularization for Language Guided Image Classification from Pre-trained Vison Language Models](https://huggingface.co/papers/2503.11360) (2025)\n* [Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization](https://huggingface.co/papers/2502.13146) (2025)\n* [MMRL: Multi-Modal Representation Learning for Vision-Language Models](https://huggingface.co/papers/2503.08497) (2025)\n* [Advancing Multimodal In-Context Learning in Large Vision-Language Models with Task-aware Demonstrations](https://huggingface.co/papers/2503.04839) (2025)\n* [AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding](https://huggingface.co/papers/2502.01341) (2025)\n* [Treble Counterfactual VLMs: A Causal Approach to Hallucination](https://huggingface.co/papers/2503.06169) (2025)\n* [Visual Attention Never Fades: Selective Progressive Attention ReCalibration for Detailed Image Captioning in Multimodal Large Language Models](https://huggingface.co/papers/2502.01419) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.13891.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.13891", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Semantic-Clipping: Efficient Vision-Language Modeling with Semantic-Guidedd Visual Selection](https://huggingface.co/papers/2503.11794) (2025)\n* [Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison](https://huggingface.co/papers/2502.14827) (2025)\n* [Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding](https://huggingface.co/papers/2503.14140) (2025)\n* [Elevating Visual Question Answering through Implicitly Learned Reasoning Pathways in LVLMs](https://huggingface.co/papers/2503.14674) (2025)\n* [BREEN: Bridge Data-Efficient Encoder-Free Multimodal Learning with Learnable Queries](https://huggingface.co/papers/2503.12446) (2025)\n* [Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference](https://huggingface.co/papers/2503.13108) (2025)\n* [LVPruning: An Effective yet Simple Language-Guided Vision Token Pruning Approach for Multi-modal Large Language Models](https://huggingface.co/papers/2501.13652) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.14201.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.14201", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning](https://huggingface.co/papers/2501.17501) (2025)\n* [Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation](https://huggingface.co/papers/2503.11402) (2025)\n* [UnitCoder: Scalable Iterative Code Synthesis with Unit Test Guidance](https://huggingface.co/papers/2502.11460) (2025)\n* [Enhancing Code Generation for Low-Resource Languages: No Silver Bullet](https://huggingface.co/papers/2501.19085) (2025)\n* [Optimizing Deep Learning Models to Address Class Imbalance in Code Comment Classification](https://huggingface.co/papers/2501.15854) (2025)\n* [Optimizing Datasets for Code Summarization: Is Code-Comment Coherence Enough?](https://huggingface.co/papers/2502.07611) (2025)\n* [LoRACode: LoRA Adapters for Code Embeddings](https://huggingface.co/papers/2503.05315) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.14237.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.14237", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [FCoT-VL:Advancing Text-oriented Large Vision-Language Models with Efficient Visual Token Compression](https://huggingface.co/papers/2502.18512) (2025)\n* [LLaVA-MLB: Mitigating and Leveraging Attention Bias for Training-Free Video LLMs](https://huggingface.co/papers/2503.11205) (2025)\n* [TimeLoc: A Unified End-to-End Framework for Precise Timestamp Localization in Long Videos](https://huggingface.co/papers/2503.06526) (2025)\n* [BIMBA: Selective-Scan Compression for Long-Range Video Question Answering](https://huggingface.co/papers/2503.09590) (2025)\n* [BREEN: Bridge Data-Efficient Encoder-Free Multimodal Learning with Learnable Queries](https://huggingface.co/papers/2503.12446) (2025)\n* [Robust Multimodal Learning via Cross-Modal Proxy Tokens](https://huggingface.co/papers/2501.17823) (2025)\n* [Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers](https://huggingface.co/papers/2503.11579) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.14487.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.14487", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation](https://huggingface.co/papers/2503.10618) (2025)\n* [Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities](https://huggingface.co/papers/2503.11905) (2025)\n* [FlexControl: Computation-Aware ControlNet with Differentiable Router for Text-to-Image Generation](https://huggingface.co/papers/2502.10451) (2025)\n* [Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening](https://huggingface.co/papers/2502.12146) (2025)\n* [RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers](https://huggingface.co/papers/2502.14377) (2025)\n* [OminiControl2: Efficient Conditioning for Diffusion Transformers](https://huggingface.co/papers/2503.08280) (2025)\n* [Underlying Semantic Diffusion for Effective and Efficient In-Context Learning](https://huggingface.co/papers/2503.04050) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15242.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15242", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DynaCode: A Dynamic Complexity-Aware Code Benchmark for Evaluating Large Language Models in Code Generation](https://huggingface.co/papers/2503.10452) (2025)\n* [CodeArena: A Collective Evaluation Platform for LLM Code Generation](https://huggingface.co/papers/2503.01295) (2025)\n* [LLM4EFFI: Leveraging Large Language Models to Enhance Code Efficiency and Correctness](https://huggingface.co/papers/2502.18489) (2025)\n* [Learning to Solve and Verify: A Self-Play Framework for Code and Test Generation](https://huggingface.co/papers/2502.14948) (2025)\n* [A Survey On Large Language Models For Code Generation](https://huggingface.co/papers/2503.01245) (2025)\n* [CodeIF: Benchmarking the Instruction-Following Capabilities of Large Language Models for Code Generation](https://huggingface.co/papers/2502.19166) (2025)\n* [Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval](https://huggingface.co/papers/2502.19149) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15451.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15451", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing](https://huggingface.co/papers/2503.13836) (2025)\n* [CASIM: Composite Aware Semantic Injection for Text to Motion Generation](https://huggingface.co/papers/2502.02063) (2025)\n* [MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation without Vector Quantization](https://huggingface.co/papers/2503.14040) (2025)\n* [Unlocking Pretrained LLMs for Motion-Related Multimodal Generation: A Fine-Tuning Approach to Unify Diffusion and Next-Token Prediction](https://huggingface.co/papers/2503.06119) (2025)\n* [AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion](https://huggingface.co/papers/2503.07418) (2025)\n* [MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model](https://huggingface.co/papers/2501.19083) (2025)\n* [HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models](https://huggingface.co/papers/2503.11513) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15558.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15558", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding](https://huggingface.co/papers/2501.16411) (2025)\n* [ST-Think: How Multimodal Large Language Models Reason About 4D Worlds from Ego-Centric Videos](https://huggingface.co/papers/2503.12542) (2025)\n* [Visual Agentic AI for Spatial Reasoning with a Dynamic API](https://huggingface.co/papers/2502.06787) (2025)\n* [EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for Visual Spatial Tasks](https://huggingface.co/papers/2503.11089) (2025)\n* [Magma: A Foundation Model for Multimodal AI Agents](https://huggingface.co/papers/2502.13130) (2025)\n* [PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability](https://huggingface.co/papers/2503.08481) (2025)\n* [R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization](https://huggingface.co/papers/2503.10615) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15567.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15567", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation](https://huggingface.co/papers/2502.12638) (2025)\n* [All-atom Diffusion Transformers: Unified generative modelling of molecules and materials](https://huggingface.co/papers/2503.03965) (2025)\n* [EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling](https://huggingface.co/papers/2502.09509) (2025)\n* [Representing 3D Shapes With 64 Latent Vectors for 3D Diffusion Models](https://huggingface.co/papers/2503.08737) (2025)\n* [Hyper3D: Efficient 3D Representation via Hybrid Triplane and Octree Feature for Enhanced 3D Shape Variational Auto-Encoders](https://huggingface.co/papers/2503.10403) (2025)\n* [Exploring Representation-Aligned Latent Space for Better Generation](https://huggingface.co/papers/2502.00359) (2025)\n* [UniGenX: Unified Generation of Sequence and Structure with Autoregressive Diffusion](https://huggingface.co/papers/2503.06687) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15672.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15672", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving](https://huggingface.co/papers/2502.07309) (2025)\n* [Occ-LLM: Enhancing Autonomous Driving with Occupancy-Based Large Language Models](https://huggingface.co/papers/2502.06419) (2025)\n* [InsightDrive: Insight Scene Representation for End-to-End Autonomous Driving](https://huggingface.co/papers/2503.13047) (2025)\n* [V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models](https://huggingface.co/papers/2502.09980) (2025)\n* [The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey](https://huggingface.co/papers/2502.10498) (2025)\n* [HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation](https://huggingface.co/papers/2501.14729) (2025)\n* [RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye View Segmentation](https://huggingface.co/papers/2502.14792) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.15855.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.15855", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Reangle-A-Video: 4D Video Generation as Video-to-Video Translation](https://huggingface.co/papers/2503.09151) (2025)\n* [SteerX: Creating Any Camera-Free 3D and 4D Scenes with Geometric Steering](https://huggingface.co/papers/2503.12024) (2025)\n* [Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors](https://huggingface.co/papers/2503.13272) (2025)\n* [GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation](https://huggingface.co/papers/2503.06136) (2025)\n* [WonderVerse: Extendable 3D Scene Generation with Video Generative Models](https://huggingface.co/papers/2503.09160) (2025)\n* [SV4D 2.0: Enhancing Spatio-Temporal Consistency in Multi-View Video Diffusion for High-Quality 4D Generation](https://huggingface.co/papers/2503.16396) (2025)\n* [FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View Synthesis](https://huggingface.co/papers/2503.13265) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16031.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16031", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Irony Detection, Reasoning and Understanding in Zero-shot Learning](https://huggingface.co/papers/2501.16884) (2025)\n* [How to Protect Yourself from 5G Radiation? Investigating LLM Responses to Implicit Misinformation](https://huggingface.co/papers/2503.09598) (2025)\n* [Worse than Zero-shot? A Fact-Checking Dataset for Evaluating the Robustness of RAG Against Misleading Retrievals](https://huggingface.co/papers/2502.16101) (2025)\n* [FanChuan: A Multilingual and Graph-Structured Benchmark For Parody Detection and Analysis](https://huggingface.co/papers/2502.16503) (2025)\n* [Evaluation of Hate Speech Detection Using Large Language Models and Geographical Contextualization](https://huggingface.co/papers/2502.19612) (2025)\n* [Reasoning About Persuasion: Can LLMs Enable Explainable Propaganda Detection?](https://huggingface.co/papers/2502.16550) (2025)\n* [Detecting Offensive Memes with Social Biases in Singapore Context Using Multimodal Large Language Models](https://huggingface.co/papers/2502.18101) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16055.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16055", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [AdaptSR: Low-Rank Adaptation for Efficient and Scalable Real-World Super-Resolution](https://huggingface.co/papers/2503.07748) (2025)\n* [DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations](https://huggingface.co/papers/2502.06029) (2025)\n* [Med-LEGO: Editing and Adapting toward Generalist Medical Image Diagnosis](https://huggingface.co/papers/2503.01164) (2025)\n* [Task-Specific Knowledge Distillation from the Vision Foundation Model for Enhanced Medical Image Segmentation](https://huggingface.co/papers/2503.06976) (2025)\n* [One Head Eight Arms: Block Matrix based Low Rank Adaptation for CLIP-based Few-Shot Learning](https://huggingface.co/papers/2501.16720) (2025)\n* [Fine Tuning without Catastrophic Forgetting via Selective Low Rank Adaptation](https://huggingface.co/papers/2501.15377) (2025)\n* [Quantum-Enhanced LLM Efficient Fine Tuning](https://huggingface.co/papers/2503.12790) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16057.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16057", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Autonomy-of-Experts Models](https://huggingface.co/papers/2501.13074) (2025)\n* [DiffMoE: Dynamic Token Selection for Scalable Diffusion Transformers](https://huggingface.co/papers/2503.14487) (2025)\n* [Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts](https://huggingface.co/papers/2503.05066) (2025)\n* [Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient](https://huggingface.co/papers/2502.05172) (2025)\n* [Accelerating MoE Model Inference with Expert Sharding](https://huggingface.co/papers/2503.08467) (2025)\n* [Mixture of Decoupled Message Passing Experts with Entropy Constraint for General Node Classification](https://huggingface.co/papers/2502.08083) (2025)\n* [DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs](https://huggingface.co/papers/2502.12455) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16188.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16188", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Visual-RFT: Visual Reinforcement Fine-Tuning](https://huggingface.co/papers/2503.01785) (2025)\n* [Reinforcement Learning Outperforms Supervised Fine-Tuning: A Case Study on Audio Question Answering](https://huggingface.co/papers/2503.11197) (2025)\n* [OThink-MR1: Stimulating multimodal generalized reasoning capabilities through dynamic reinforcement learning](https://huggingface.co/papers/2503.16081) (2025)\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning](https://huggingface.co/papers/2503.07365) (2025)\n* [R1-Zero's\"Aha Moment\"in Visual Reasoning on a 2B Non-SFT Model](https://huggingface.co/papers/2503.05132) (2025)\n* [Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning](https://huggingface.co/papers/2503.09516) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16194.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16194", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation](https://huggingface.co/papers/2503.16430) (2025)\n* [V2Flow: Unifying Visual Tokenization and Large Language Model Vocabularies for Autoregressive Image Generation](https://huggingface.co/papers/2503.07493) (2025)\n* [Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens](https://huggingface.co/papers/2503.08377) (2025)\n* [NFIG: Autoregressive Image Generation with Next-Frequency Prediction](https://huggingface.co/papers/2503.07076) (2025)\n* [HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models](https://huggingface.co/papers/2503.11513) (2025)\n* [Robust Latent Matters: Boosting Image Generation with Sampling Error Synthesis](https://huggingface.co/papers/2503.08354) (2025)\n* [FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction](https://huggingface.co/papers/2502.20313) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16212.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16212", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models](https://huggingface.co/papers/2502.12855) (2025)\n* [Advancing Reasoning in Large Language Models: Promising Methods and Approaches](https://huggingface.co/papers/2502.03671) (2025)\n* [Leveraging Online Olympiad-Level Math Problems for LLMs Training and Contamination-Resistant Evaluation](https://huggingface.co/papers/2501.14275) (2025)\n* [MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer](https://huggingface.co/papers/2503.14891) (2025)\n* [Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance](https://huggingface.co/papers/2502.08127) (2025)\n* [Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research](https://huggingface.co/papers/2502.04644) (2025)\n* [MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task](https://huggingface.co/papers/2502.11684) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16219.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16219", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning](https://huggingface.co/papers/2503.09516) (2025)\n* [Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models](https://huggingface.co/papers/2503.16419) (2025)\n* [Pensez: Less Data, Better Reasoning -- Rethinking French LLM](https://huggingface.co/papers/2503.13661) (2025)\n* [AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO](https://huggingface.co/papers/2502.14669) (2025)\n* [L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning](https://huggingface.co/papers/2503.04697) (2025)\n* [Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search](https://huggingface.co/papers/2502.02508) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16252.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16252", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://huggingface.co/papers/2503.06749) (2025)\n* [Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models](https://huggingface.co/papers/2503.02318) (2025)\n* [Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning](https://huggingface.co/papers/2503.09516) (2025)\n* [MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning](https://huggingface.co/papers/2503.07365) (2025)\n* [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2501.12948) (2025)\n* [R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning](https://huggingface.co/papers/2503.05592) (2025)\n* [Chain-of-Thought Matters: Improving Long-Context Language Models with Reasoning Path Supervision](https://huggingface.co/papers/2502.20790) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16278.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16278", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UniGenX: Unified Generation of Sequence and Structure with Autoregressive Diffusion](https://huggingface.co/papers/2503.06687) (2025)\n* [HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models](https://huggingface.co/papers/2503.11513) (2025)\n* [Autoregressive Image Generation with Randomized Parallel Decoding](https://huggingface.co/papers/2503.10568) (2025)\n* [Beyond Atoms: Enhancing Molecular Pretrained Representations with 3D Space Modeling](https://huggingface.co/papers/2503.10489) (2025)\n* [RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models](https://huggingface.co/papers/2503.10406) (2025)\n* [Direction-Aware Diagonal Autoregressive Image Generation](https://huggingface.co/papers/2503.11129) (2025)\n* [OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models](https://huggingface.co/papers/2503.08686) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16302.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16302", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Hyper3D: Efficient 3D Representation via Hybrid Triplane and Octree Feature for Enhanced 3D Shape Variational Auto-Encoders](https://huggingface.co/papers/2503.10403) (2025)\n* [Representing 3D Shapes With 64 Latent Vectors for 3D Diffusion Models](https://huggingface.co/papers/2503.08737) (2025)\n* [Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation](https://huggingface.co/papers/2501.12202) (2025)\n* [GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation](https://huggingface.co/papers/2503.06136) (2025)\n* [Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation](https://huggingface.co/papers/2503.01370) (2025)\n* [Bolt3D: Generating 3D Scenes in Seconds](https://huggingface.co/papers/2503.14445) (2025)\n* [Learning Few-Step Diffusion Models by Trajectory Distribution Matching](https://huggingface.co/papers/2503.06674) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16322.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16322", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Augmented Conditioning Is Enough For Effective Training Image Generation](https://huggingface.co/papers/2502.04475) (2025)\n* [CascadeV: An Implementation of Wurstchen Architecture for Video Generation](https://huggingface.co/papers/2501.16612) (2025)\n* [Masked Autoencoders Are Effective Tokenizers for Diffusion Models](https://huggingface.co/papers/2502.03444) (2025)\n* [Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation](https://huggingface.co/papers/2502.05415) (2025)\n* [IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models](https://huggingface.co/papers/2501.13920) (2025)\n* [CAT Pruning: Cluster-Aware Token Pruning For Text-to-Image Diffusion Models](https://huggingface.co/papers/2502.00433) (2025)\n* [Efficient Transformer for High Resolution Image Motion Deblurring](https://huggingface.co/papers/2501.18403) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16356.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16356", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Revealing and Mitigating Over-Attention in Knowledge Editing](https://huggingface.co/papers/2502.14838) (2025)\n* [Knowledge Updating? No More Model Editing! Just Selective Contextual Reasoning](https://huggingface.co/papers/2503.05212) (2025)\n* [K-Edit: Language Model Editing with Contextual Knowledge Awareness](https://huggingface.co/papers/2502.10626) (2025)\n* [MindBridge: Scalable and Cross-Model Knowledge Editing via Memory-Augmented Modality](https://huggingface.co/papers/2503.02701) (2025)\n* [CoME: An Unlearning-based Approach to Conflict-free Model Editing](https://huggingface.co/papers/2502.15826) (2025)\n* [AnyEdit: Edit Any Knowledge Encoded in Language Models](https://huggingface.co/papers/2502.05628) (2025)\n* [Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge Expansion](https://huggingface.co/papers/2502.12598) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16365.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16365", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent](https://huggingface.co/papers/2501.18867) (2025)\n* [Improving Vision-Language-Action Model with Online Reinforcement Learning](https://huggingface.co/papers/2501.16664) (2025)\n* [Diffusion Instruction Tuning](https://huggingface.co/papers/2502.06814) (2025)\n* [VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive Token Caching in Robotic Manipulation](https://huggingface.co/papers/2502.02175) (2025)\n* [OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction](https://huggingface.co/papers/2503.03734) (2025)\n* [Mem2Ego: Empowering Vision-Language Models with Global-to-Ego Memory for Long-Horizon Embodied Navigation](https://huggingface.co/papers/2502.14254) (2025)\n* [EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents](https://huggingface.co/papers/2502.09560) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16375.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16375", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Bolt3D: Generating 3D Scenes in Seconds](https://huggingface.co/papers/2503.14445) (2025)\n* [GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation](https://huggingface.co/papers/2503.06136) (2025)\n* [Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors](https://huggingface.co/papers/2503.13272) (2025)\n* [FlexWorld: Progressively Expanding 3D Scenes for Flexiable-View Synthesis](https://huggingface.co/papers/2503.13265) (2025)\n* [Controllable 3D Outdoor Scene Generation via Scene Graphs](https://huggingface.co/papers/2503.07152) (2025)\n* [VideoRFSplat: Direct Scene-Level Text-to-3D Gaussian Splatting Generation with Flexible Pose and Multi-View Joint Modeling](https://huggingface.co/papers/2503.15855) (2025)\n* [GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control](https://huggingface.co/papers/2503.03751) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16397.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16397", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Learning Few-Step Diffusion Models by Trajectory Distribution Matching](https://huggingface.co/papers/2503.06674) (2025)\n* [Adding Additional Control to One-Step Diffusion with Joint Distribution Matching](https://huggingface.co/papers/2503.06652) (2025)\n* [SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation](https://huggingface.co/papers/2503.09641) (2025)\n* [Accelerate High-Quality Diffusion Models with Inner Loop Feedback](https://huggingface.co/papers/2501.13107) (2025)\n* [One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation](https://huggingface.co/papers/2503.13358) (2025)\n* [One-Step Diffusion Model for Image Motion-Deblurring](https://huggingface.co/papers/2503.06537) (2025)\n* [Denoising Score Distillation: From Noisy Diffusion Pretraining to One-Step High-Quality Generation](https://huggingface.co/papers/2503.07578) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16413.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16413", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Inst3D-LMM: Instance-Aware 3D Scene Understanding with Multi-modal Instruction Tuning](https://huggingface.co/papers/2503.00513) (2025)\n* [SplatTalk: 3D VQA with Gaussian Splatting](https://huggingface.co/papers/2503.06271) (2025)\n* [DynamicGSG: Dynamic 3D Gaussian Scene Graphs for Environment Adaptation](https://huggingface.co/papers/2502.15309) (2025)\n* [Dr. Splat: Directly Referring 3D Gaussian Splatting via Direct Language Embedding Registration](https://huggingface.co/papers/2502.16652) (2025)\n* [GaussianGraph: 3D Gaussian-based Scene Graph Generation for Open-world Scene Understanding](https://huggingface.co/papers/2503.04034) (2025)\n* [EgoSplat: Open-Vocabulary Egocentric Scene Understanding with Language Embedded 3D Gaussian Splatting](https://huggingface.co/papers/2503.11345) (2025)\n* [UniGS: Unified Language-Image-3D Pretraining with Gaussian Splatting](https://huggingface.co/papers/2502.17860) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16416.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16416", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ACEBench: Who Wins the Match Point in Tool Usage?](https://huggingface.co/papers/2501.12851) (2025)\n* [A Survey on the Optimization of Large Language Model-based Agents](https://huggingface.co/papers/2503.12434) (2025)\n* [MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents](https://huggingface.co/papers/2503.01935) (2025)\n* [Collab-Overcooked: Benchmarking and Evaluating Large Language Models as Collaborative Agents](https://huggingface.co/papers/2502.20073) (2025)\n* [Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions](https://huggingface.co/papers/2503.08979) (2025)\n* [Large Language Models for Multi-Robot Systems: A Survey](https://huggingface.co/papers/2502.03814) (2025)\n* [PlanGenLLMs: A Modern Survey of LLM Planning Capabilities](https://huggingface.co/papers/2502.11221) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16418.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16418", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion](https://huggingface.co/papers/2501.13452) (2025)\n* [Personalize Anything for Free with Diffusion Transformer](https://huggingface.co/papers/2503.12590) (2025)\n* [DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability](https://huggingface.co/papers/2503.06505) (2025)\n* [DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation](https://huggingface.co/papers/2503.10618) (2025)\n* [CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers](https://huggingface.co/papers/2502.06527) (2025)\n* [Conceptrol: Concept Control of Zero-shot Personalized Image Generation](https://huggingface.co/papers/2503.06568) (2025)\n* [Concat-ID: Towards Universal Identity-Preserving Video Synthesis](https://huggingface.co/papers/2503.14151) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16420.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16420", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [WonderVerse: Extendable 3D Scene Generation with Video Generative Models](https://huggingface.co/papers/2503.09160) (2025)\n* [Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images](https://huggingface.co/papers/2503.13439) (2025)\n* [Bolt3D: Generating 3D Scenes in Seconds](https://huggingface.co/papers/2503.14445) (2025)\n* [I2V3D: Controllable image-to-video generation with 3D guidance](https://huggingface.co/papers/2503.09733) (2025)\n* [Text-driven 3D Human Generation via Contrastive Preference Optimization](https://huggingface.co/papers/2502.08977) (2025)\n* [Enhancing Monocular 3D Scene Completion with Diffusion Model](https://huggingface.co/papers/2503.00726) (2025)\n* [GSV3D: Gaussian Splatting-based Geometric Distillation with Stable Video Diffusion for Single-Image 3D Object Generation](https://huggingface.co/papers/2503.06136) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16421.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16421", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [PoseTraj: Pose-Aware Trajectory Control in Video Diffusion](https://huggingface.co/papers/2503.16068) (2025)\n* [I2V3D: Controllable image-to-video generation with 3D guidance](https://huggingface.co/papers/2503.09733) (2025)\n* [VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation](https://huggingface.co/papers/2502.07531) (2025)\n* [CameraCtrl II: Dynamic Scene Exploration via Camera-controlled Video Diffusion Models](https://huggingface.co/papers/2503.10592) (2025)\n* [MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation](https://huggingface.co/papers/2502.04299) (2025)\n* [TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models](https://huggingface.co/papers/2503.05638) (2025)\n* [C-Drag: Chain-of-Thought Driven Motion Controller for Video Generation](https://huggingface.co/papers/2502.19868) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16422.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16422", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling](https://huggingface.co/papers/2502.20378) (2025)\n* [D2GV: Deformable 2D Gaussian Splatting for Video Representation in 400FPS](https://huggingface.co/papers/2503.05600) (2025)\n* [Swift4D:Adaptive divide-and-conquer Gaussian Splatting for compact and efficient reconstruction of dynamic scene](https://huggingface.co/papers/2503.12307) (2025)\n* [S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction](https://huggingface.co/papers/2503.08217) (2025)\n* [Light4GS: Lightweight Compact 4D Gaussian Splatting Generation via Context Model](https://huggingface.co/papers/2503.13948) (2025)\n* [ForestSplats: Deformable transient field for Gaussian Splatting in the Wild](https://huggingface.co/papers/2503.06179) (2025)\n* [7DGS: Unified Spatial-Temporal-Angular Gaussian Splatting](https://huggingface.co/papers/2503.07946) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16425.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16425", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [\"Principal Components\"Enable A New Language of Images](https://huggingface.co/papers/2503.08685) (2025)\n* [V2Flow: Unifying Visual Tokenization and Large Language Model Vocabularies for Autoregressive Image Generation](https://huggingface.co/papers/2503.07493) (2025)\n* [Robust Latent Matters: Boosting Image Generation with Sampling Error Synthesis](https://huggingface.co/papers/2503.08354) (2025)\n* [Layton: Latent Consistency Tokenizer for 1024-pixel Image Reconstruction and Generation by 256 Tokens](https://huggingface.co/papers/2503.08377) (2025)\n* [HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models](https://huggingface.co/papers/2503.11513) (2025)\n* [Direction-Aware Diagonal Autoregressive Image Generation](https://huggingface.co/papers/2503.11129) (2025)\n* [Autoregressive Image Generation with Randomized Parallel Decoding](https://huggingface.co/papers/2503.10568) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16428.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16428", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference](https://huggingface.co/papers/2502.20766) (2025)\n* [Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention](https://huggingface.co/papers/2502.11089) (2025)\n* [Twilight: Adaptive Attention Sparsity with Hierarchical Top-p Pruning](https://huggingface.co/papers/2502.02770) (2025)\n* [APB: Accelerating Distributed Long-Context Inference by Passing Compressed Context Blocks across GPUs](https://huggingface.co/papers/2502.12085) (2025)\n* [Training-free and Adaptive Sparse Attention for Efficient Long Video Generation](https://huggingface.co/papers/2502.21079) (2025)\n* [SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference](https://huggingface.co/papers/2502.18137) (2025)\n* [PowerAttention: Exponentially Scaling of Receptive Fields for Effective Sparse Attention](https://huggingface.co/papers/2503.03588) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2503.16429.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2503.16429", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [CleverDistiller: Simple and Spatially Consistent Cross-modal Distillation](https://huggingface.co/papers/2503.09878) (2025)\n* [PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point Clouds](https://huggingface.co/papers/2503.13914) (2025)\n* [Multi-Scale Neighborhood Occupancy Masked Autoencoder for Self-Supervised Learning in LiDAR Point Clouds](https://huggingface.co/papers/2502.20316) (2025)\n* [DINOSTAR: Deep Iterative Neural Object Detector Self-Supervised Training for Roadside LiDAR Applications](https://huggingface.co/papers/2501.17076) (2025)\n* [Cheesemap: A High-Performance Point-Indexing Data Structure for Neighbor Search in LiDAR Data](https://huggingface.co/papers/2502.11602) (2025)\n* [PointSea: Point Cloud Completion via Self-structure Augmentation](https://huggingface.co/papers/2502.17053) (2025)\n* [Fully-Geometric Cross-Attention for Point Cloud Registration](https://huggingface.co/papers/2502.08285) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
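Each file added in this commit holds a single JSON object with the same two fields, `paper_url` and `comment`. As a minimal sketch of how the added records could be read back, assuming the files sit in a local `data/` directory laid out as in this commit (the directory path and the print loop below are illustrative, not part of the commit itself):

```python
import json
from pathlib import Path

# Load every recommendation record added under data/ (one JSON object per file).
records = []
for path in sorted(Path("data").glob("*.json")):
    with path.open(encoding="utf-8") as f:
        records.append(json.load(f))

# Each record carries the two fields used throughout this commit.
for record in records:
    print(record["paper_url"])
    # record["comment"] is a markdown string listing the recommended papers.
```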