*: equal contribution or alphabetical ordering.
Working paper
- A Kernel-Based View of Language Model Fine-Tuning
Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, and Sanjeev Arora.
Publications
- Fast Mixing of Stochastic Gradient Descent with Normalization and Weight Decay
Zhiyuan Li*, Tianhao Wang*, and Dingli Yu*.
NeurIPS 2022
- New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
Arushi Gupta*, Nikunj Saunshi*, Dingli Yu*, Kaifeng Lyu, and Sanjeev Arora.
NeurIPS 2022 (Oral)
- Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
Sanjeev Arora*, Simon S. Du*, Zhiyuan Li*, Ruslan Salakhutdinov*, Ruosong Wang*, and Dingli Yu*.
ICLR 2020 (Spotlight): Proc. 8th International Conference on Learning Representations, 2020.
- Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee
Wei Hu*, Zhiyuan Li*, and Dingli Yu*.
ICLR 2020: Proc. 8th International Conference on Learning Representations, 2020.
- Characterization of Group-Strategyproof Mechanisms for Facility Location in Strictly Convex Space
Pingzhong Tang*, Dingli Yu*, and Shengyu Zhao*.
EC-20: Proc. 21st ACM Conference on Economics and Computation, 2020.
- Balanced Outcomes in Wage Bargaining
Pingzhong Tang* and Dingli Yu*.
AAMAS-18: Proc. 17th International Conference on Autonomous Agents and MultiAgent Systems, July 2018.
- Fair Rent Division on a Budget
Ariel D. Procaccia*, Rodrigo A. Velez*, and Dingli Yu*.
AAAI-18: Proc. 32nd AAAI Conference on Artificial Intelligence, Feb 2018.
Manuscripts
- Enhanced Convolutional Neural Tangent Kernels
Zhiyuan Li*, Ruosong Wang*, Dingli Yu*, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora.
- A Partisan Districting Protocol with Provably Nonpartisan Outcomes
Wesley Pegden*, Ariel D. Procaccia*, and Dingli Yu*.
Press coverage: The Washington Post (op-ed), Slate, Pittsburgh Post-Gazette, New Scientist, Axios, WDET (radio), Wisconsin Public Radio