Yongjin Han
Seoul, South Korea
My research goal is to understand model behavior through adversarial noise, and to leverage this understanding to achieve certified/adversarial robustness. Although worst-case adversarial noise can induce unintended behaviors in AI models, it also provides a valuable signal for learning more robust and interpretable representations. However, its role in shaping model behaviors remains poorly understood.
Previously, I studied software verification/analysis, cybersecurity, and compiler optimization to understand how software operates and develop reliable software. Building on these experiences, my research interests continue to focus on ensuring the reliability and safety of software systems, now extended to modern AI models.
In parallel, I am also exploring efficient AI approaches such as PEFT, quantization, and pruning, since security mechanisms often come at the cost of availability and practicality for users. Ultimately, I aim to balance robustness and efficiency in AI, enabling users to deploy trustworthy AI technologies in real-world applications.
I am currently working with Prof. Suhyun Kim, who provides valuable guidance as I pursue self-motivated, independent research.
I earned a Master’s degree in Computer Science at the University of California, Davis, advised by Prof. Ian Davidson. Under his supervision, I studied deep fair clustering using constraint programming (CP) and learned that CP can be effectively applied to ML systems. I obtained my Bachelor’s degree in Computer Science and Engineering at Dongguk University, where Prof. Yunsik Son first introduced me to programming languages, secure software, and related research areas.
In my free time, I usually cook or watch movies.
Recent works
- Lipschitz-aware Linearity Grafting for Certified Robustness, arXiv preprint arXiv:2510.25130, 2025