
    [2510.09885] Closing the Data-Efficiency Gap Between Autoregressive and Masked Diffusion LLMs

By Awais · January 29, 2026

    [Submitted on 10 Oct 2025 (v1), last revised 28 Jan 2026 (this version, v3)]

By Xu Pan and 4 other authors

Abstract: Large language models (LLMs) are often used in environments where facts evolve, yet factual knowledge updates via fine-tuning on unstructured text often suffer from 1) reliance on compute-heavy paraphrase augmentation and 2) the reversal curse. Recent studies show that diffusion large language models (dLLMs) require fewer training samples to achieve lower loss in pre-training and are more resistant to the reversal curse, suggesting dLLMs may learn new knowledge more easily than autoregressive LLMs (arLLMs). We test this hypothesis in controlled knowledge fine-tuning experiments and find that while arLLMs rely on paraphrase augmentation to generalize knowledge text into question-answering (QA) capability, dLLMs do not require paraphrases to achieve high QA accuracy. To further investigate whether the demasking objective alone can induce this knowledge-injection advantage in dLLMs, regardless of their diffusion denoising paradigm, we propose masked fine-tuning for arLLMs, which prompts an arLLM to reconstruct the original text given a masked version in context. Masked fine-tuning substantially improves the efficacy of knowledge injection in arLLMs, i.e., no paraphrases are needed and it is resistant to the reversal curse, closing the gap between arLLMs and dLLMs. We also demonstrate that the same demasking objective improves supervised fine-tuning (SFT) on math tasks over standard SFT, suggesting broader applicability of the demasking objective.
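The masked fine-tuning idea described in the abstract — prompting an arLLM to reconstruct the original text from a masked copy shown in context — can be sketched as a simple data-construction step. The sketch below is a minimal illustration, not the paper's implementation: the mask token, the prompt template, and the masking ratio are all assumptions for illustration.

```python
import random

MASK_TOKEN = "[MASK]"  # placeholder; the paper's exact mask token is not specified here


def build_masked_ft_example(text: str, mask_ratio: float = 0.3, seed: int = 0):
    """Turn a knowledge sentence into a (prompt, target) pair for masked
    fine-tuning of an autoregressive LLM: the prompt shows a masked copy
    of the text, and the target is the original text to reconstruct."""
    rng = random.Random(seed)
    words = text.split()
    n_mask = max(1, int(len(words) * mask_ratio))
    masked_idx = set(rng.sample(range(len(words)), n_mask))
    masked = " ".join(
        MASK_TOKEN if i in masked_idx else w for i, w in enumerate(words)
    )
    # Hypothetical prompt template; the arLLM is trained to emit the target
    # (the unmasked original) conditioned on this prompt.
    prompt = f"Fill in the masked words.\nMasked: {masked}\nOriginal:"
    return prompt, " " + text


prompt, target = build_masked_ft_example("Paris is the capital of France.")
```

In a real training run, each (prompt, target) pair would be tokenized and the loss computed only on the target span, so the model learns to denoise the fact rather than merely continue left-to-right — the mechanism the paper credits for sidestepping paraphrase augmentation.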

Submission history

From: Xu Pan
[v1] Fri, 10 Oct 2025 21:43:50 UTC (8,094 KB)
[v2] Sat, 29 Nov 2025 20:03:40 UTC (11,065 KB)
[v3] Wed, 28 Jan 2026 03:44:41 UTC (11,765 KB)

    © 2025 skytik.cc. All rights reserved.

    Type above and press Enter to search. Press Esc to cancel.