Hello! I am a Ph.D. student at KAIST AI, advised by Jaegul Choo.
Previously, I received my M.S. from KAIST and my B.S. from Korea University. During the first half of 2024, I interned with the RL team at Sony AI, mentored by Takuma Seno, Kaushik Subramanian, and Peter Stone. In winter 2022, I interned with the RL team at Kakao Enterprise, working closely with Kyushik Min. In summer 2020, I interned with the RL team at Neowiz Games, focusing on the turn-based strategy game BrownDust.
Currently, my main interest lies in creating embodied AI that can continually learn, adapt, and generalize in dynamic environments. If you want to discuss anything research-related, please feel free to reach out to me :)
Email  /  CV  /  Google Scholar  /  LinkedIn  /  Github
We design network architectures that steer convergence toward simple functions, enabling parameter scaling in RL.
We present DoDont, a skill discovery algorithm that learns diverse behaviors by following the behaviors in "do" videos while avoiding those in "don't" videos.
To allow the network to continually adapt and generalize, we introduce the Hare and Tortoise architecture, inspired by the complementary learning systems of the human brain.
We investigate which pre-training objectives are beneficial for in-distribution, near-out-of-distribution, and far-out-of-distribution generalization in visual reinforcement learning.
We introduce an add-on convolution module for pre-trained ViT models that enhances adaptability for vision-based motor control by injecting locality and translation-equivariance biases.
For sample-efficient RL, the agent needs to quickly adapt to various inputs (input plasticity) and outputs (label plasticity). We present PLASTIC, which maintains both input and label plasticity by identifying smooth local minima and preserving gradient flow.
We introduce DISCO-DANCE, a skill discovery algorithm focused on learning diverse, task-agnostic behaviors. DISCO-DANCE addresses the common exploration limitation of skill discovery algorithms through explicit guidance.
We present a visual pre-training algorithm grounded in self-predictive learning principles tailored for reinforcement learning.
We construct a real estate appraisal framework that integrates spatial and temporal aspects, validated using a dataset of 3.6M real estate transactions in South Korea from 2016 to 2020.
By validating the long-term impact of user feedback in the MovieLens and Amazon Review datasets, we discovered that these datasets are inadequate for evaluating reinforcement learning-based interactive recommender systems.
We gathered data from 280,000 League of Legends matches played by the top 0.3% of ranked players in Korea. From this, we developed DraftRec, a personalized champion recommendation system aimed at maximizing players' win rates.
We collected a gunshot sound dataset from PlayerUnknown's Battlegrounds (PUBG). Using this data, we developed a gunshot localization model that can be applied to real-world scenarios.
We developed an AI agent for the turn-based strategy game BrownDust. To enhance learning efficiency, we designed feature representations and dedicated architectures for embedding the game characters.
Template based on Jon Barron's website.