Abstract
Masked diffusion language models (MDLMs) promise fast, non-autoregressive text generation, yet existing samplers, which pick tokens to unmask based on model confidence, ignore interactions when unmasking multiple positions in parallel and effectively reduce to slow, token-by-token autoregressive behavior. We propose the Dilated Unmasking Scheduler (DUS), an inference-only, planner-model-free method that partitions sequence positions into non-adjacent dilated groups and unmasks them in parallel, so as to minimize an upper bound on the joint entropy gain at each denoising step. By explicitly trading the number of denoiser calls against generation quality, DUS recovers most of the performance lost under traditional parallel unmasking strategies. Across math (GSM8K, MATH500), code (HumanEval, MBPP), and general-knowledge benchmarks (BBH, MMLU-Pro), DUS outperforms confidence-based planners without modifying the underlying denoiser, revealing the true speed-quality frontier of MDLMs.
Diffusion LMs & Dilated Unmasking Scheduler (DUS)
⛓️💥 Why Discrete Diffusion for LLMs?
Modern large-scale LLMs almost universally use autoregressive (AR) decoding: predicting one token at a time in strict left-to-right order. While AR decoding yields high local fidelity, it suffers from error accumulation and requires $G$ sequential denoiser calls for a length-$G$ output, under-utilizing today's massively parallel hardware.
By contrast, masked diffusion treats the entire sequence as a latent "noisy" mask and gradually unmasks tokens over a small number of denoising passes. In principle this supports any-order token revelation and fully parallel updates, trading the number of passes (and thus latency) against generation fidelity.
🔎 The AR-Equivalent "Planner"
Almost all existing diffusion samplers collapse back to AR speed and quality by unmasking one token per step, using denoiser confidence or entropy to pick the next index. In effect, the denoiser becomes an implicit planner, but it still:
- Ignores interactions between multiple tokens unmasked in the same step.
- Fails to account for how revealing $x_i$ would change the uncertainty of $x_j$ if both are revealed together.
As soon as you try to unmask more than one token at once, quality plummets.
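To make the one-token-per-step behavior concrete, here is a toy sketch of a single confidence-planner step. The `probs` structure and the `MASK` sentinel are illustrative assumptions, not the actual sampler API:

```python
def confidence_planner_step(seq, probs, MASK=-1):
    """One step of a confidence-based planner (sketch): unmask the single
    masked position whose top predicted probability is highest.

    `probs[i]` is a hypothetical (token_id, probability) pair for position i,
    taken from one forward pass of the denoiser.
    """
    masked = [i for i, t in enumerate(seq) if t == MASK]
    best = max(masked, key=lambda i: probs[i][1])  # most confident position
    seq[best] = probs[best][0]                     # reveal exactly one token
    return seq
```

Because every step reveals exactly one token, generating $G$ tokens costs $G$ denoiser calls, exactly the AR regime this planner was meant to escape.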
⏱️ Our Dilated Unmasking Scheduler (DUS)
We introduce DUS, a model-agnostic, planner-model-free inference scheduler that requires no extra training or changes to the denoiser.
1. Dilated Partitioning
- Let $G$ be the sequence length and set $K = \lceil\log G\rceil$. Let $\{C_1, \ldots, C_K\}$ be a partition of the $G$ positions into $K$ non-adjacent (dilated) groups.
- Each group $C_k$ then contains on average $\frac{G}{K}$ tokens with minimal pairwise dependencies.
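A minimal sketch of this partitioning, assuming a base-2 logarithm and a simple stride-$K$ grouping rule (the paper's exact grouping rule may differ):

```python
import math

def dilated_partition(G: int):
    """Partition positions 0..G-1 into K = ceil(log2 G) dilated groups.

    Group k collects every K-th position starting at offset k, so any two
    tokens within a group are at least K positions apart (non-adjacent).
    """
    K = max(1, math.ceil(math.log2(G)))
    return [list(range(k, G, K)) for k in range(K)]

groups = dilated_partition(16)
# K = 4 stride-4 groups, e.g. groups[0] == [0, 4, 8, 12]
```

The stride guarantees non-adjacency within each group, which is what the entropy argument in step 3 relies on.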
2. Parallel Unmasking
For $k = 1, \ldots, K$:
- Unmask all tokens in $C_k$ simultaneously.
- Run one pass of the denoiser over the full sequence.
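The loop above can be sketched as follows. Here `denoiser` is a hypothetical callable that returns one predicted token id per position from a full-sequence forward pass, and `MASK` is an assumed sentinel id; the real model interface will differ:

```python
import math

MASK = -1  # sentinel id for still-masked positions (assumption)

def dus_generate(denoiser, G: int):
    """Sketch of the DUS loop: one denoiser call per dilated group.

    Each of the K iterations runs the denoiser once over the full sequence
    and commits all tokens of the current non-adjacent group in parallel.
    """
    K = max(1, math.ceil(math.log2(G)))
    groups = [list(range(k, G, K)) for k in range(K)]
    seq = [MASK] * G
    for group in groups:          # K passes total, instead of G
        preds = denoiser(seq)     # one full-sequence forward pass
        for i in group:           # commit the group's tokens in parallel
            seq[i] = preds[i]
    return seq
```

Note that the denoiser always conditions on the latest partial sequence, so later groups benefit from everything revealed in earlier passes.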
3. Entropy-Bound Justification
Denote by $s_t$ the current partial state at iteration $t$ (i.e., which tokens have already been revealed and which remain masked). Under a first-order, fast-mixing Markov chain over token positions, non-adjacent tokens exhibit negligible mutual information, so
$$H(x_{C_k} \mid s_t) \approx \sum_{i \in C_k} H(x_i \mid s_t).$$

Hence, grouping non-adjacent tokens in each $C_k$ controls the maximum quality loss per unmasking step.
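A toy numerical check of this additivity: when two token distributions are independent, the joint entropy equals the sum of the marginal entropies (the bound becomes exact). The vocabularies and probabilities below are arbitrary illustrations:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Two independent token distributions over a toy vocabulary.
p_i = {"a": 0.5, "b": 0.5}      # H = 1 bit
p_j = {"x": 0.25, "y": 0.75}    # H ≈ 0.811 bits

# Joint distribution under independence.
joint = {(u, v): p_i[u] * p_j[v] for u, v in product(p_i, p_j)}

# H(x_i, x_j) == H(x_i) + H(x_j) when mutual information is zero.
assert abs(entropy(joint) - (entropy(p_i) + entropy(p_j))) < 1e-9
```

For adjacent tokens the mutual information is generally positive, so unmasking them together would overestimate the joint entropy by exactly that amount; dilation keeps the gap small.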
4. Speed–Quality Trade-off
- AR baseline: $G$ denoiser calls (one per token).
- DUS: $K = \lceil\log G\rceil$ calls ⇒ $\approx \frac{G}{\log G}$× speedup.
- Empirical result: DUS preserves up to 25% of quality even at 5×–10× fewer passes.
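Plugging in concrete numbers makes the trade-off tangible (assuming a base-2 logarithm, as elsewhere in these sketches):

```python
import math

def dus_calls_and_speedup(G: int):
    """Denoiser calls under DUS (K = ceil(log2 G)) vs. the AR baseline (G)."""
    K = max(1, math.ceil(math.log2(G)))
    return K, G / K

K, speedup = dus_calls_and_speedup(1024)
# For G = 1024 tokens: K = 10 denoiser calls vs. 1024 AR calls (~102x fewer)
```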
By explicitly managing the number of unmasking steps via a dilated schedule, DUS uncovers the true speed-quality frontier of masked diffusion LMs, delivering sub-linear, any-order generation at practical fidelity.
Interactive Demos
Explore step-by-step unmasking and see how DUS vs. confidence planners work in your browser.
💻 DiffuCoder-Instruct on MBPP
🧮 LLaDA-Base on GSM8K
Note: Non-changing text represents post-EOS tokens unmasked by planners but not shown in the demo.
Benchmarks
We evaluate DUS across math (GSM8K, MATH500), code (HumanEval, MBPP), and general knowledge benchmarks (BBH, MMLU-Pro), with additional datasets and ablations detailed in the full paper.