
About

Hello! I am a Ph.D. student at KAIST AI, advised by Jaegul Choo.
Previously, I received my M.S. at KAIST and my B.S. at Korea University.

My research aims to create intelligent systems that can continually learn, adapt, and generalize in dynamic environments. To this end, I am interested in self-supervised learning, reinforcement learning, and their applications to gaming and robotics.

Email  ·  CV  ·  Scholar  ·  Github
Last updated: Feb 07, 2024

News

- Mar-Aug 2024: I will be joining a Gran Turismo team at Sony AI, Tokyo, as a research intern.
- Sep 2023: Two reinforcement learning papers were accepted to NeurIPS'23.

Publications

Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks
Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle
Preprint.

To allow the network to continually adapt and generalize, we introduce the Hare and Tortoise architecture, inspired by the complementary learning systems of the human brain.

Investigating Pre-Training Objectives for Generalization in Visual Reinforcement Learning
Donghu Kim*, Hojoon Lee*, Kyungmin Lee*, Dongyoon Hwang, Jaegul Choo
Preprint.

We investigate which pre-training objectives are beneficial for in-distribution, near-out-of-distribution, and far-out-of-distribution generalization in visual reinforcement learning.

PLASTIC: Enhancing Input and Label Plasticity for Sample Efficient Reinforcement Learning
Hojoon Lee*, Hanseul Cho*, Hyunseung Kim*, ..., Chulhee Yun.
NeurIPS'23.
arXiv / code / slide / poster

We introduce PLASTIC, an RL algorithm that enhances sample efficiency by simultaneously preserving the model's input and label plasticity.

DISCO-DANCE: Learning to Discover Skills through Guidance
Hyunseung Kim*, Byungkun Lee*, Hojoon Lee, ..., Jaegul Choo.
NeurIPS'23.
project page / arXiv / code

We introduce DISCO-DANCE, an unsupervised skill discovery algorithm designed to encourage exploration through direct guidance.

On the Importance of Feature Decorrelation for Unsupervised Representation Learning for Reinforcement Learning
Hojoon Lee, Koanho Lee, Dongyoon Hwang, Hyunho Lee, Byungkun Lee, Jaegul Choo.
ICML'23.
arXiv / code / poster

We introduce an offline self-predictive learning algorithm for reinforcement learning.

ST-RAP: A Spatio-Temporal Framework for Real Estate Appraisal
Hojoon Lee*, Hawon Jeong*, Byungkun Lee*, Kyungyup Lee, Jaegul Choo.
CIKM'23 (short).
arXiv / code / poster

We construct a real estate appraisal framework that integrates spatial and temporal aspects.

Towards Validating Long-Term User Feedbacks in Interactive Recommender System
Hojoon Lee, Dongyoon Hwang, Kyusik Min, Jaegul Choo.
SIGIR'22 (short), Honorable Mention Award.
arXiv / poster

We analyze the limitations of existing benchmarks for interactive recommender systems.

DraftRec: Personalized Draft Recommendation for Winning in MOBA Games
Hojoon Lee*, Dongyoon Hwang*, HyunSeung Kim, Byungkun Lee, Jaegul Choo.
WWW'22.
arXiv / code / poster

We introduce DraftRec, a personalized champion recommendation system for League of Legends that utilizes a hierarchical transformer architecture.

Enemy Spotted: In-game Gun Sound Dataset for Gunshot Classification and Localization
Junwoo Park, Youngwoo Cho, Gyuhyeon Sim, Hojoon Lee, Jaegul Choo.
COG'22.
arXiv

We construct an in-game gun sound dataset that can enhance the accuracy of real-world firearm classification.