Autoregressive Direct Preference Optimization
Abstract
Direct preference optimization (DPO) has emerged as a promising approach for aligning large language models (LLMs) with human preferences. However, the widespread reliance on the response-level Bradley-Terry (BT) model may limit its full potential, as the reference and learnable models are assumed to be autoregressive only after deriving the objective function. Motivated by this limitation, we revisit the theoretical foundations of DPO and propose a novel formulation that explicitly introduces the autoregressive assumption prior to applying the BT model. By reformulating and extending DPO, we derive a novel variant, termed Autoregressive DPO (ADPO), that explicitly integrates autoregressive modeling into the preference optimization framework. Without violating the theoretical foundations, the derived loss takes an elegant form: it shifts the summation operation in the DPO objective outside the log-sigmoid function. Furthermore, through theoretical analysis of ADPO, we show that there exist two length measures to be considered when designing DPO-based algorithms: the token length μ and the feedback length μ'. To the best of our knowledge, we are the first to explicitly distinguish these two measures and analyze their implications for preference optimization in LLMs.
Method Overview
ADPO Framework: Our method explicitly integrates autoregressive modeling into the preference optimization framework, shifting the summation operation outside the log-sigmoid function for improved theoretical consistency.
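The core difference can be illustrated with a short sketch. Below is a minimal PyTorch comparison of the standard DPO loss with an ADPO-style variant in which the summation over tokens is moved outside the log-sigmoid. Tensor shapes, the positional pairing of chosen/rejected tokens, and the averaging scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(chosen_logratios, rejected_logratios, beta=0.1):
    """Standard DPO: sum per-token log-ratios first, then apply log-sigmoid.

    chosen_logratios / rejected_logratios: (batch, seq_len) tensors of
    log pi_theta(y_t | x, y_<t) - log pi_ref(y_t | x, y_<t).
    """
    margin = chosen_logratios.sum(dim=-1) - rejected_logratios.sum(dim=-1)
    return -F.logsigmoid(beta * margin).mean()

def adpo_loss(chosen_logratios, rejected_logratios, beta=0.1):
    """ADPO-style sketch: apply log-sigmoid per token, then sum.

    Here we naively pair tokens by position and truncate to the shorter
    response; the actual pairing and granularity scheme is defined in the paper.
    """
    T = min(chosen_logratios.shape[-1], rejected_logratios.shape[-1])
    per_token_margin = chosen_logratios[..., :T] - rejected_logratios[..., :T]
    return -F.logsigmoid(beta * per_token_margin).sum(dim=-1).mean()
```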
Granularity Families: We define two granularity families in ADPO (i.e., two ways of decomposing a response into windows of tokens): the Static Family and the Adaptive Family. The Static Family divides responses using a fixed window size k, while the Adaptive Family determines the window size dynamically by decomposing each response into m segments, as sketched below.
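The following is a small sketch of the two granularity families: the Static Family splits a token sequence into fixed windows of size k, while the Adaptive Family splits it into m roughly equal segments whose window size scales with the response length. Function names and the equal-split rule are illustrative assumptions.

```python
from typing import List

def static_windows(tokens: List[int], k: int) -> List[List[int]]:
    """Static Family: fixed window size k (the last window may be shorter)."""
    return [tokens[i:i + k] for i in range(0, len(tokens), k)]

def adaptive_windows(tokens: List[int], m: int) -> List[List[int]]:
    """Adaptive Family: decompose the response into m segments,
    so the effective window size depends on the response length."""
    n = len(tokens)
    base, extra = divmod(n, m)
    segments, start = [], 0
    for i in range(m):
        size = base + (1 if i < extra else 0)
        segments.append(tokens[start:start + size])
        start += size
    return [s for s in segments if s]

# Example: a 10-token response with k=3 yields static windows of sizes 3,3,3,1;
# with m=3 it yields adaptive segments of sizes 4,3,3.
```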
Main Results
Main Results: ADPO demonstrates consistent improvements across multiple mathematical reasoning benchmarks, validating our theoretical contributions and showing the practical benefits of autoregressive preference optimization. Granularity analysis reveals that finer granularity can lead to better performance.
BibTeX
@misc{oi2026autoregressivedirectpreferenceoptimization,
  title={Autoregressive Direct Preference Optimization},
  author={Masanari Oi and Mahiro Ukai and Masahiro Kaneko and Naoaki Okazaki and Nakamasa Inoue},
  year={2026},
  eprint={2602.09533},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.09533},
}