
Li Dong - Homepage
Self-Boosting Large Language Models with Synthetic Preference Data. Qingxiu Dong#, Li Dong, Xingxing Zhang, Zhifang Sui, Furu Wei. International Conference on Learning Representations (ICLR), 2025.
UniLM Pre-training. The model parameters are shared across the LM objectives (i.e., bidirectional LM, unidirectional LM, and sequence-to-sequence LM). We use different self-attention masks to control each token's access to context. The right-to-left LM mask is analogous to the left-to-right one and is omitted from the figure for brevity.
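The three masking patterns above can be sketched as boolean attention masks, where entry (i, j) is True if token i may attend to token j. This is a minimal illustrative sketch (the function names and the NumPy formulation are my own, not from the UniLM codebase):

```python
import numpy as np

def bidirectional_mask(n):
    # Bidirectional LM: every token attends to all tokens (BERT-style).
    return np.ones((n, n), dtype=bool)

def left_to_right_mask(n):
    # Unidirectional LM: token i attends only to positions <= i (causal mask).
    return np.tril(np.ones((n, n), dtype=bool))

def seq2seq_mask(src_len, tgt_len):
    # Sequence-to-sequence LM: source tokens attend bidirectionally within
    # the source segment; target tokens attend to the whole source plus
    # the preceding target tokens (causal within the target segment).
    n = src_len + tgt_len
    mask = np.zeros((n, n), dtype=bool)
    mask[:src_len, :src_len] = True                # source -> source (full)
    mask[src_len:, :src_len] = True                # target -> source (full)
    mask[src_len:, src_len:] = np.tril(
        np.ones((tgt_len, tgt_len), dtype=bool))   # target -> target (causal)
    return mask
```

Because the parameters are shared, switching the mask is all that distinguishes the three objectives during pre-training; a right-to-left mask would simply be the upper-triangular counterpart of `left_to_right_mask`.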
Jichang Zhao, Li Dong, Junjie Wu†, and Ke Xu‡. State Key Lab of Software Development Environment, Beihang University; †Beijing Key Laboratory of Emergency Support Simulation Technologies for City Operations, School of Economics and Management, Beihang University; ‡Corresponding author.