Hojoon Lee


Hello! I am a Ph.D. student at KAIST AI, advised by Jaegul Choo.

Previously, I received my M.S. at KAIST and my B.S. at Korea University. During the first half of 2024, I interned with the RL team at Sony AI, mentored by Takuma Seno, Kaushik Subramanian, and Peter Stone. In winter 2022, I interned with the RL team at Kakao Enterprise, working closely with Kyushik Min. In summer 2020, I interned with the RL team at Neowiz Games, focusing on the turn-based strategy game BrownDust.

Currently, my main interest lies in creating embodied AI that can continually learn, adapt, and generalize in dynamic environments. If you would like to discuss anything research-related, please feel free to reach out :)

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Github


News


Publications

Reinforcement Learning
SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno.
Preprint.
arXiv

We design network architectures that steer convergence toward simple functions, which allows parameters to be scaled up in deep RL.

Reinforcement Learning Skill Discovery
Do’s and Don’ts: Learning Desirable Skills with Instruction Videos
Hyunseung Kim, Byungkun Lee, Hojoon Lee, Dongyoon Hwang, Donghu Kim, Jaegul Choo.
NeurIPS'24.
project page / arXiv

We present DoDont, a skill discovery algorithm that learns diverse behaviors by following the behaviors shown in "do" videos while avoiding those in "don't" videos.

Reinforcement Learning Plasticity
Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks
Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle.
ICML'24.
arXiv / poster / Bibtex

To allow the network to continually adapt and generalize, we introduce the Hare and Tortoise architecture, inspired by the complementary learning systems of the human brain.

Reinforcement Learning Pre-training
ATARI-PB: Investigating Pre-Training Objectives for Generalization in Vision-Based RL
Donghu Kim*, Hojoon Lee*, Kyungmin Lee*, Dongyoon Hwang, Jaegul Choo.
ICML'24.
project page / arXiv / poster / Bibtex

We investigate which pre-training objectives are beneficial for in-distribution, near-out-of-distribution, and far-out-of-distribution generalization in visual reinforcement learning.

Reinforcement Learning Adaptation
Adapting Pretrained ViTs with Convolution Injector for Visuo-Motor Control
Dongyoon Hwang*, Byungkun Lee*, Hojoon Lee, Hyunseung Kim, Jaegul Choo.
ICML'24.
project page / arXiv / Bibtex

We introduce an add-on convolution module for pre-trained ViT models that enhances their adaptability to vision-based motor control by injecting locality and translation-equivariance biases.

Reinforcement Learning Plasticity
PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning
Hojoon Lee*, Hanseul Cho*, Hyunseung Kim*, Daehoon Gwak, Joonkee Kim, Jaegul Choo, Se-Young Yun, Chulhee Yun.
NeurIPS'23.
arXiv / code / slide / poster / Bibtex

For sample-efficient RL, the agent needs to quickly adapt to various inputs (input plasticity) and outputs (label plasticity). We present PLASTIC, which maintains both input and label plasticity by identifying smooth local minima and preserving gradient flow.

Reinforcement Learning Skill Discovery
DISCO-DANCE: Learning to Discover Skills through Guidance
Hyunseung Kim*, Byungkun Lee*, Hojoon Lee, Dongyoon Hwang, Jaegul Choo.
NeurIPS'23.
project page / arXiv / code / Bibtex

We introduce DISCO-DANCE, a skill discovery algorithm focused on learning diverse, task-agnostic behaviors. DISCO-DANCE addresses the limited exploration common to skill discovery algorithms through explicit guidance.

Reinforcement Learning Pre-training
SimTPR: On the Importance of Feature Decorrelation for Unsupervised Representation Learning in Reinforcement Learning
Hojoon Lee, Koanho Lee, Dongyoon Hwang, Hyunho Lee, Byungkun Lee, Jaegul Choo.
ICML'23.
arXiv / code / poster / Bibtex

We present a visual pre-training algorithm grounded in self-predictive learning principles tailored for reinforcement learning.

Data Mining
ST-RAP: A Spatio-Temporal Framework for Real Estate Appraisal
Hojoon Lee*, Hawon Jung*, Byungkun Lee*, Jaegul Choo.
CIKM'23 (short).
arXiv / code / poster / Bibtex

We construct a real estate appraisal framework that integrates spatial and temporal aspects, validated using a dataset of 3.6M real estate transactions in South Korea from 2016 to 2020.

Data Mining Reinforcement Learning
Towards Validating Long-Term User Feedbacks in Interactive Recommender System
Hojoon Lee, Dongyoon Hwang, Kyushik Min, Jaegul Choo.
SIGIR'22 (short), Best Short Paper Honorable Mention.
arXiv / poster / Bibtex

By validating the long-term impact of user feedback on the MovieLens and Amazon Review datasets, we discovered that these datasets are inadequate for evaluating reinforcement learning-based interactive recommender systems.

Data Mining Reinforcement Learning Game
DraftRec: Personalized Draft Recommendation for Winning in MOBA Games
Hojoon Lee*, Dongyoon Hwang*, Hyunseung Kim, Byungkun Lee, Jaegul Choo.
WWW'22.
arXiv / code / poster / Bibtex

We gathered data from 280,000 League of Legends matches played by the top 0.3% of ranked players in Korea. From this, we developed DraftRec, a personalized champion recommendation system aimed at maximizing players' win rates.

Data Mining Game
Enemy Spotted: In-game Gun Sound Dataset for Gunshot Classification and Localization
Junwoo Park, Youngwoo Cho, Gyuhyeon Sim, Hojoon Lee, Jaegul Choo.
CoG'22.
arXiv / Bibtex

We collected a gunshot sound dataset from PlayerUnknown's Battlegrounds (PUBG). Using this data, we developed a gunshot localization model that can be applied to real-world scenarios.

Reinforcement Learning Game
Conquering a rule-changing game with action-relevance-aware AlphaZero
Hojoon Lee, Dongyoon Hwang, Jeesoo Woo, Jaegul Choo.
Preprint'19.
poster

We developed an AI agent for the turn-based strategy game BrownDust. To enhance learning efficiency, we designed feature representations and specialized architectures for embedding the game characters.


Other activities

Reviewing activities
  • Serving as a reviewer for NeurIPS'23, ICML'24, and ICLR'24.
Awards
  • Travel Award ($3,000), Crevisse Partners, 2023.
  • SIGIR Best Short Paper Honorable Mention, 2022.
  • Korea Government Full Scholarship ($10,000 per year), 2020, 2021.
  • Silver Prize ($2,000), Korea University Graduation Project, 2019.
  • College Scholarship ($4,000), Seongnam Scholarship Foundation, 2017.
  • General Paik Sun Yup Leadership Award, LTG Thomas S. Vandal, U.S. Army, 2017.

Talks

Template based on Jon Barron's website.