TUN3D: Towards Real-World Scene Understanding from Unposed Images Paper • 2509.21388 • Published Sep 23, 2025 • 14
∇NABLA: Neighborhood Adaptive Block-Level Attention Paper • 2507.13546 • Published Jul 17, 2025 • 124
T-LoRA: Single Image Diffusion Model Customization Without Overfitting Paper • 2507.05964 • Published Jul 8, 2025 • 119
Confidence Is All You Need: Few-Shot RL Fine-Tuning of Language Models Paper • 2506.06395 • Published Jun 5, 2025 • 133
Heeding the Inner Voice: Aligning ControlNet Training via Intermediate Features Feedback Paper • 2507.02321 • Published Jul 3, 2025 • 39
Listener-Rewarded Thinking in VLMs for Image Preferences Paper • 2506.22832 • Published Jun 28, 2025 • 23
Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models Paper • 2506.19103 • Published Jun 23, 2025 • 42
DreamBoothDPO: Improving Personalized Generation using Direct Preference Optimization Paper • 2505.20975 • Published May 27, 2025 • 36
Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models Paper • 2506.06751 • Published Jun 7, 2025 • 71
ImageReFL: Balancing Quality and Diversity in Human-Aligned Diffusion Models Paper • 2505.22569 • Published May 28, 2025 • 56
Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA Paper • 2505.21115 • Published May 27, 2025 • 140
Exploring the Latent Capacity of LLMs for One-Step Text Generation Paper • 2505.21189 • Published May 27, 2025 • 61
I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders Paper • 2503.18878 • Published Mar 24, 2025 • 119
One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation Paper • 2503.13358 • Published Mar 17, 2025 • 95
LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context Memory of Transformers Paper • 2502.15007 • Published Feb 20, 2025 • 174
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? Paper • 2502.14502 • Published Feb 20, 2025 • 91
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity Paper • 2502.13063 • Published Feb 18, 2025 • 74