About Me

I am a Ph.D. student in the Department of Computer Sciences at the University of Wisconsin-Madison, advised by Prof. Sharon Yixuan Li. Before joining Sharon’s group, I obtained my M.S. degree in Artificial Intelligence at the University of Seoul under the supervision of Prof. Kyungwoo Song and Prof. Jiyoung Jung. I had the privilege of working with Zhi-Qi Cheng, Alexander Hauptmann, and David Mortensen during my visit to Carnegie Mellon University, and with Dongyoon Han and Sangdoo Yun during my internship at NAVER AI Lab.

I am broadly interested in machine learning fundamentals and AI safety & reliability. Recently, I have focused on understanding and improving the robustness of vision-language models under distribution shifts.

News

May 2025, Selected as a Top Reviewer at ICML 2025!
May 2025, New preprint Vittle is out!
May 2025, Our UnderstandingMLLM-DistShift paper got accepted to ICML 2025!
Jan 2025, Our DaWin paper got accepted to ICLR 2025!
Sep 2024, Our CaRot paper got accepted to NeurIPS 2024!
Aug 2024, Joined UW-Madison CS as a Ph.D. student!

Selected Publications and Preprints

(* denotes equal contribution)
Refer to Google Scholar and my CV for the full publication list.

  • Visual Instruction Bottleneck Tuning
    Changdae Oh, Jiatong Li, Shawn Im, Yixuan Li
    [paper]
    Preprint 2025

  • Understanding Multimodal LLMs Under Distribution Shifts: An Information-Theoretic Approach
    Changdae Oh, Zhen Fang, Shawn Im, Xuefeng Du, Yixuan Li
    [paper]
    ICML 2025
    ICLR 2025, Spotlight @ QUESTION Workshop

  • DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation
    Changdae Oh, Yixuan Li, Kyungwoo Song, Sangdoo Yun, Dongyoon Han
    [paper] [code]
    ICLR 2025
    NeurIPS 2024, Workshop on Adaptive Foundation Models

  • Towards Calibrated Robust Fine-Tuning of Vision-Language Models
    Changdae Oh*, Hyesu Lim*, Mijoo Kim, Dongyoon Han, Sangdoo Yun, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song
    [paper] [code]
    NeurIPS 2024
    NeurIPS 2023, Workshop on Distribution Shifts

  • Geodesic Multi-Modal Mixup for Robust Fine-tuning
    Changdae Oh*, Junhyuk So*, YongTaek Lim, Hoyoon Byun, Minchul Shin, Jong-June Jeon, Kyungwoo Song
    [paper] [code]
    NeurIPS 2023
  • BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
    Changdae Oh, Hyeji Hwang, Hee-young Lee, YongTaek Lim, Geunyoung Jung, Jiyoung Jung, Hosik Choi, Kyungwoo Song
    [paper] [code]
    CVPR 2023

  • Learning Fair Representation via Distributional Contrastive Disentanglement
    Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi, Kyungwoo Song
    [paper] [code]
    KDD 2022

Education

Experience

  • Research Intern, NAVER AI Lab
    Mentor: Dongyoon Han and Sangdoo Yun, Apr. 2023 ~ Aug. 2024
    • DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation, ICLR 2025
  • Visiting Scholar / Research Collaboration, Carnegie Mellon University
    Mentor: Zhi-Qi Cheng, Sep. 2023 ~ Feb. 2024
    • Towards Calibrated Robust Fine-Tuning of Vision-Language Models, NeurIPS 2024
    • Mitigating the Linguistic Gap with Phonemic Representations for Robust Cross-lingual Transfer, EMNLP 2024 Workshop

Academic Services

  • Conference Reviewer: NeurIPS’25, ICML’25 (Top Reviewer), ICLR’25, AAAI’25, NeurIPS’24, CVPR’24
  • Conference Volunteer: NeurIPS’24, KDD’22
  • Journal Reviewer: Neural Networks’25