About Me
I am a Ph.D. student in the Department of Computer Sciences at the University of Wisconsin-Madison, advised by Prof. Sharon Yixuan Li. Before joining Sharon’s group, I obtained my M.S. degree in Artificial Intelligence at the University of Seoul under the supervision of Prof. Kyungwoo Song and Prof. Jiyoung Jung. I had the privilege of working with Zhi-Qi Cheng, Alexander Hauptmann, and David Mortensen during my visit to Carnegie Mellon University, and with Dongyoon Han and Sangdoo Yun during my internship at NAVER AI Lab.
I am broadly interested in machine learning fundamentals and trustworthy AI. Recently, I have been focusing on understanding and improving the robustness of multimodal LLMs under distribution shifts, as well as on uncertainty quantification for LLM agents.
Selected Publications and Preprints
(* denotes equal contribution)
Refer to my Google Scholar profile and CV for the full publication list.
Understanding Language Prior of LVLMs by Contrasting Chain-of-Embedding
Lin Long*, Changdae Oh*, Seongheon Park, Yixuan Li
[paper] [code]
arXiv preprint, Sep. 2025
General Exploratory Bonus for Optimistic Exploration in RLHF
Wendi Li, Changdae Oh, Yixuan Li
[paper] [code]
arXiv preprint, Sep. 2025
Visual Instruction Bottleneck Tuning
Changdae Oh, Jiatong Li, Shawn Im, Yixuan Li
[paper] [code]
NeurIPS 2025
ICML 2025, Workshop on Reliable and Responsible Foundation Models (Oral Presentation; 6/176 = 3.4%)
Understanding Multimodal LLMs Under Distribution Shifts: An Information-Theoretic Approach
Changdae Oh, Zhen Fang, Shawn Im, Xuefeng Du, Yixuan Li
[paper] [code]
ICML 2025
ICLR 2025, QUESTION Workshop (Oral Presentation)
DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation
Changdae Oh, Yixuan Li, Kyungwoo Song, Sangdoo Yun, Dongyoon Han
[paper] [code]
ICLR 2025
NeurIPS 2024, Workshop on Adaptive Foundation Models
Towards Calibrated Robust Fine-Tuning of Vision-Language Models
Changdae Oh*, Hyesu Lim*, Mijoo Kim, Dongyoon Han, Sangdoo Yun, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song
[paper] [code]
NeurIPS 2024
NeurIPS 2023, Workshop on Distribution Shifts
Geodesic Multi-Modal Mixup for Robust Fine-tuning
Changdae Oh*, Junhyuk So*, YongTaek Lim, Hoyoon Byun, Minchul Shin, Jong-June Jeon, Kyungwoo Song
[paper] [code]
NeurIPS 2023
BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
Changdae Oh, Hyeji Hwang, Hee-young Lee, YongTaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
[paper] [code]
CVPR 2023
Learning Fair Representation via Distributional Contrastive Disentanglement
Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi, Kyungwoo Song
[paper] [code]
KDD 2022
Education
Ph.D. in Computer Science, University of Wisconsin-Madison
Advisor: Prof. Sharon Yixuan Li
Sep. 2024 ~ Present
M.S. in Artificial Intelligence, University of Seoul
Advisors: Prof. Kyungwoo Song and Prof. Jiyoung Jung
Mar. 2022 ~ Aug. 2024
B.S. in Statistics, University of Seoul
Mar. 2016 ~ Feb. 2022
Experience
- Research Intern, NAVER AI Lab
Mentors: Dongyoon Han and Sangdoo Yun, Apr. 2023 ~ Aug. 2024
- DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation, ICLR 2025
- Visiting Scholar / Research Collaboration, Carnegie Mellon University
Mentor: Zhi-Qi Cheng, Sep. 2023 ~ Feb. 2024
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models, NeurIPS 2024
- Mitigating the Linguistic Gap with Phonemic Representations for Robust Cross-lingual Transfer, EMNLP 2024 Workshop
Academic Services
- Conference Reviewer:
- NeurIPS 2025, 2024
- ICML 2025 (Top Reviewer)
- ICLR 2026, 2025
- AAAI 2025
- AISTATS 2026
- CVPR 2024
- Conference Volunteer: NeurIPS’24, KDD’22
- Journal Reviewer: TMLR, Neural Networks