CXD Lab
In CXD Lab, we envision novel XR & AI systems
through Human-Computer Interaction (HCI) & UX Research
within the paradigm of Digital-Physical Convergence.
How can we incorporate digital intelligence more seamlessly into the physical world we live in? CXD Lab explores, designs, and evaluates novel AI·XR interactive UX solutions that converge the digital and physical worlds. We investigate a variety of applications, including content creation, exhibition, creativity support, education, robotics, and smart environments.
CXD Lab redesigns user experiences and interactions so that services blending XR and AI can fit more naturally into our lives. To help digital intelligence blend seamlessly into our physical living environments, we draw on Human-Computer Interaction (HCI) and interaction design perspectives to explore the possibilities and value of AI·XR interaction across fields such as education, culture and the arts, creativity support tools, and smart homes, and we propose innovative UX and system design solutions based on these explorations.
→ Research Areas → Research Methods → join us

ACTIVE ONGOING PROJECTS (2025 ~)
  • Development of Creator-enhanced Interactions for Convergence Museums with XR & AI Technology (NRF)
  • AI Music Generation based on the Artwork Perception and Embodied Appreciation (NRF)
  • Designing Interactions for Embodied Knowledge Sharing Platform through Agent-based Digital Humans (NRF)
  • Spatial Sensory Interactions with Gen AI in XR for enhanced Learning and Touring (Yonsei)
  • Restrictive Embodied Interactions in Physical-Digital Logging (NRF)
  • New Interfaces for Sustainable LLM AI

PUBLISHED PROJECTS
Here you can see our six most recent published projects. Click the ‘View All Projects’ button to discover all the other projects we have completed. To learn about unpublished ongoing projects, please feel free to contact us.
DanXeReflect: Bring Videos into XR Studio for Reflective Choreographic Collaboration
We introduce DanXeReflect, an XR system that transforms rehearsal videos into an interactive dance studio for embodied and reflective choreography. The system enables dancers to explore and communicate feedback through pose-based search, embodied revision, and body-anchored annotation.
TwistLens: Anticipation-Preserving Image Previews for Museum Experiences
TwistLens generates transformed artwork previews that communicate interpretive cues while preserving surprise. Guided by docent descriptions, EchoLens and DecoyLens help museums balance understanding with anticipation before visits.
PARASCENT: Motion‑based Parametric Viz to Digitally Communicate Perfume Scents
We introduce ParaScent, a parametric visualization toolkit that translates perfume attributes into motion-based visuals. By mapping scent qualities such as intensity, diffusion, and longevity to animated parameters, the system enables intuitive understanding of fragrances in digital environments.
PONIFY: Pose-based Painting Sonification for Empathetic Musical Artwork Perception
We introduce Ponify, a sonification method that translates the perceived dynamics of paintings into music through pose analysis. By extracting limb movements from human figures in artworks, Ponify maps visual motion cues to musical parameters such as tempo and density.
DESIGNATOMY: Tech-Driven Rapid Ideation Pedagogy for Tech-Novice Design Students
We introduce Designatomy, a taxonomy-driven design pedagogy that helps technology-novice students generate technology-inspired design ideas. The approach structures technological features into conceptual categories that support systematic ideation.
Design Implications of Generative AI and XR Interactions for 3D Product Design Tools
This paper envisions generative AI and XR as means to support novices in adopting dual roles as designers and users while personalizing 3D products. A generative design study conducted in participants’ homes examined how novices think and act while designing personal products.
→ View All Projects

RECENT NEWS
→ View All News & Essays

Are you interested in joining our lab?

Explore more about our research vision, key interests, frequently used methodologies, and application process. We welcome highly motivated students eager to innovate and shape the future through design and technology. :)

→ Our Research Areas → Research Methods → Join Us → FAQ