Our lab envisions a transformation in how we interact with technology through Heads-Up Computing. This new interaction paradigm is built on a wearable platform comprising modules for the head, hand, and body. These modules distribute the device's input and output capabilities to align with natural human input and output channels. Heads-Up Computing proposes a multimodal interaction approach combining voice and gesture. Once realized, it could significantly change how we work and live: imagine seamlessly attending to and manipulating information during everyday indoor and outdoor activities such as cooking, exercising, hiking, or socializing.


Related publications: https://www.synteraction.org/publications/