Human Computer Interaction (HCI)

The HCI Group focuses on technologies and resources related to human-computer interaction. Our current projects are indoor positioning with inertial sensors, dialogue systems with emotions, dance performance analysis using deep learning techniques, and a work assistance system with a head-mounted display.

Dance Performance Analysis using Multi-modal Sensors

This research investigates a new approach to supporting ballroom dance practice through “dance figure” recognition and figure-specific feedback. Dance figures are groups of steps, each with its own name and choreography. We have developed a multi-modal figure recognition method that applies machine/deep learning to video and wearable sensor data, yielding accurate recognition that is robust against occlusions.
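As a rough illustration of the multi-modal idea, the sketch below fuses pooled video features with a recurrent encoding of wearable IMU windows before classifying the dance figure. The layer sizes, names, and late-fusion design are illustrative assumptions, not the actual model.

```python
# Minimal late-fusion sketch (illustrative, not the group's actual model):
# combine per-clip video features with wearable IMU sequences to predict
# a dance-figure label.
import torch
import torch.nn as nn

class FigureClassifier(nn.Module):
    def __init__(self, video_dim=512, imu_dim=64, hidden=128, n_figures=20):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.imu_enc = nn.GRU(imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_figures)

    def forward(self, video_feat, imu_seq):
        # video_feat: (B, video_dim) pooled per-clip features
        # imu_seq:    (B, T, imu_dim) wearable-sensor windows
        v = self.video_enc(video_feat)
        _, h = self.imu_enc(imu_seq)            # h: (1, B, hidden)
        fused = torch.cat([v, h.squeeze(0)], dim=-1)
        return self.head(fused)                 # logits over dance figures

model = FigureClassifier()
logits = model(torch.randn(8, 512), torch.randn(8, 100, 64))
```

Because the two modalities are encoded independently and fused late, the IMU branch can still carry the figure signal when the dancer is occluded in the video, which is the intuition behind the robustness claim above.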

Adaptive Dialogue System with Emotional Reply

Chatbots, which have become popular in recent years, can be applied to cognitive behavioral therapy and mental health care, and are expected to serve as digital partners that stay close to humans. In this research, we focus on the emotional factors that humans and dialogue systems need in order to form social relationships. We are implementing a dialogue system that extracts semantic information and emotional cues from human speech, selects an emotion appropriate to the situation, and uses it to generate responses.
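The toy sketch below illustrates the shape of such a pipeline: extract an emotion from the utterance, then condition the reply on it. The keyword-based extractor, emotion labels, and response templates are stand-ins for the learned components and are purely illustrative.

```python
# Toy pipeline sketch: detect an emotion, then let it steer the reply.
# All labels, keywords, and templates are illustrative assumptions.
EMOTION_KEYWORDS = {
    "joy": {"happy", "great", "glad"},
    "sadness": {"sad", "tired", "lonely"},
    "anger": {"angry", "annoyed", "unfair"},
}

def extract_emotion(utterance: str) -> str:
    """Return the first emotion whose keywords appear in the utterance."""
    words = set(utterance.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

RESPONSE_STYLE = {
    "joy": "That sounds wonderful! ",
    "sadness": "I'm sorry to hear that. ",
    "anger": "That does sound frustrating. ",
    "neutral": "",
}

def generate_reply(utterance: str) -> str:
    emotion = extract_emotion(utterance)
    # A real system would condition a neural generator on (semantics, emotion);
    # here the detected emotion simply selects an empathetic prefix.
    return RESPONSE_STYLE[emotion] + "Tell me more about it."

print(generate_reply("I feel so lonely these days"))
```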

Indoor Localization using Smartphone Inertial Sensors

Pedestrian Dead Reckoning (PDR) is one of the most promising technologies for indoor localization. However, conventional PDR algorithms struggle to estimate position robustly under variations in gait and device placement. We therefore apply deep learning to PDR to estimate robust pedestrian trajectories from the inertial sensors embedded in smartphones.
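The sketch below shows one common form of learning-based PDR, used here only as an illustration: a small network regresses a 2-D displacement from each window of accelerometer/gyroscope samples, and the displacements are accumulated into a trajectory. The architecture and window shapes are assumptions, not the method proposed in this project.

```python
# Illustrative learning-based PDR sketch: regress per-window displacement
# from IMU data and integrate the displacements into an indoor trajectory.
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    def __init__(self, channels=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(hidden, 2)   # (dx, dy) per window

    def forward(self, imu):              # imu: (B, 6, T) accel + gyro
        return self.fc(self.conv(imu).squeeze(-1))

def integrate(net, windows, start=(0.0, 0.0)):
    """Accumulate per-window displacements into a trajectory."""
    pos = torch.tensor(start)
    path = [pos]
    with torch.no_grad():
        for w in windows:                # each w: (6, T)
            pos = pos + net(w.unsqueeze(0)).squeeze(0)
            path.append(pos)
    return torch.stack(path)

traj = integrate(DisplacementNet(), torch.randn(50, 6, 200))
```

Regressing displacements directly, rather than detecting steps and multiplying by a fixed stride, is what lets a learned model absorb variation in gait and device position.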

Work Assistance System with Mixed Reality

This research aims to visually support picking operations at logistics sites using mixed reality devices. We are investigating gaze-based user interfaces, navigation, and vision enhancement to eliminate blind spots.
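As a small illustration of one gaze-based interface idea, the sketch below implements dwell-time selection: an item is treated as picked once the gaze stays on it beyond a time threshold. The threshold, item ids, and class are hypothetical, not the deployed system.

```python
# Illustrative dwell-time gaze selection (hypothetical, not the deployed UI):
# the update loop reports a selection once gaze has rested on one item
# for at least `dwell` seconds.
import time

class DwellSelector:
    def __init__(self, dwell=0.8):       # assumed dwell threshold in seconds
        self.dwell = dwell
        self.target = None
        self.since = 0.0

    def update(self, gazed_item, now=None):
        """Return the item id once gaze has dwelt on it long enough, else None."""
        now = time.monotonic() if now is None else now
        if gazed_item != self.target:    # gaze moved: restart the timer
            self.target, self.since = gazed_item, now
            return None
        if gazed_item is not None and now - self.since >= self.dwell:
            self.target, self.since = None, now   # reset after a selection
            return gazed_item
        return None

sel = DwellSelector()
assert sel.update("shelf_A3", now=0.0) is None   # gaze lands on the item
assert sel.update("shelf_A3", now=1.0) == "shelf_A3"  # dwell exceeded: picked
```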