ITA: Interactive Technologies for Accessibility is an ACIM Research Cluster spearheaded by Dr. Kening Zhu, Dr. Miu Ling Lam, and Dr. Hongbo Fu. The team aims to create new interactive techniques to improve accessibility. The three directors are experts in computer graphics, human-computer interaction, robotics, computational imaging, and tangible user interfaces.
The research objectives are:
- To develop innovative interactive technologies that help people with various accessibility needs (e.g., children, the elderly, and people with disabilities);
- To augment the capability of inexperienced users in professional computational tasks (e.g., rapid hardware prototyping, 3D modeling, and animation) through interactive technologies;
- To evaluate the task effectiveness and user experience of these technologies;
- To promote social empathy towards communities in need through art-tech education.
This project (ITA 3.0) expands and deepens our previous ACIM projects, ITA and ITA 2.0. During the project period (2017–2022), we developed a series of technologies and prototypes to enhance people's access to digital information, including AR technologies for 3D content creation; a smart ring, VR gloves, and a white cane with thermo-tactile feedback; tangible programming blocks for visually impaired children; a fingertip text-entry technique for smart devices; and bezel-initiated swipe interaction for round smartwatches. Building on these results, we secured external grants totaling over HKD 20 million (e.g., GRF, NSFC, and a Jockey Club donation), published over 70 papers, filed more than 10 patent applications, and presented several art exhibitions.
For ITA 3.0, we propose to promote the social understanding of accessibility and inclusion through art and technology, by developing novel art-tech toolkits and deploying art-tech educational workshops for the youth. In addition, we will continue to leverage smart technologies, such as machine/deep learning and smart devices (e.g., smartphones, watches, wearables, and sensors), to improve the lives of people with various accessibility needs. We aim to improve the accessibility of digital content through multimodal and embodied interaction, such as hand gestures, body postures, voice, and haptic feedback.
PROJECTS
Impression Machine is a media art project based on robotics and kinetic photography. In this project, we developed a new system and took a highly experimental approach to creating a new visual experience for the audience. The work is embodied as a "performative" installation comprising a 6-axis robot arm, a digital camera, a computer, and two screens (a 60-inch TV and a 3 m-wide projection). The system/machine/installation performs while the audience is invited to observe a sequence of events and then comprehend the reasons and meanings behind the abstract visuals presented.
The camera is mounted on the robot arm, moving through the space while taking long-exposure photographs. On the TV, a set of contour lines is displayed and captured by the camera. Because the camera motion and the displayed contour lines are tightly synchronized, the light contours appear stacked in an aligned manner in the long exposures, constituting a number of 3D geometric shapes conceived by Leonardo da Vinci. This asks us to rethink how "perspective" is created in a 2D representation (such as a painting or a photograph), and directly responds to Leonardo's study of perspective. Each resulting long-exposure photo is displayed on the projection wall. All light contours are rendered in real time, and all long exposures are captured in real time on site. This work developed a technological innovation for new media art embodiment and facilitated a new artistic experience for the audience.
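As a rough illustration of the synchronization principle only (the parameters and rendering below are invented for this sketch, not taken from the installation's software), the following Python snippet integrates one displayed contour per synchronized motion/display step into a single long-exposure image, so the stacked rings trace out the silhouette of a 3D form:

```python
# Toy simulation of contour stacking in a long exposure
# (hypothetical parameters; not the installation's actual code).
import numpy as np

H = W = 256
exposure = np.zeros((H, W))           # the accumulated long exposure
yy, xx = np.mgrid[0:H, 0:W]

n_steps = 60                          # synchronized motion/display steps
for t in range(n_steps):
    # As the (simulated) camera moves, the displayed contour changes
    # in lockstep; each ring lands in a different place, and the stack
    # reads as a cone in the final exposure.
    r = 20 + 80 * t / (n_steps - 1)   # contour radius at this step
    ring = np.abs(np.hypot(xx - W / 2, yy - H / 2) - r) < 1.0
    exposure += ring                  # a long exposure integrates light

print("lit pixels in the long exposure:", int((exposure > 0).sum()))
```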
Impression Machine was shown in the "Leonardo da Vinci: Art & Science. Then & Now" exhibition from September to December 2019 at the CityU Indra and Harry Banga Gallery. Twelve contemporary artworks, including Impression Machine, were displayed alongside 12 original drawings by Leonardo da Vinci, celebrating the 500th anniversary of his death. The exhibition attracted over 11,000 visitors.
The project was partially supported by an ACIM Research Fellowship and an HKADC Project Grant.
Website: https://www.cityu.edu.hk/bg/exhibitions/leonardo-da-vinci
ThermAirGlove (Kening Zhu) is a pneumatic glove that provides thermal feedback to support the haptic experience of grabbing objects of different temperatures and materials in VR. The system consists of a glove with five inflatable airbags on the fingers and the palm, two temperature chambers (one hot and one cold), and a closed-loop pneumatic thermal-control system. User studies showed that using ThermAirGlove (TAGlove) in immersive VR significantly improved users' sense of presence compared to conditions without any temperature or material simulation.
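The closed-loop control idea can be sketched as follows; the simulated plant, gains, and thresholds are assumptions for illustration, not the actual TAGlove controller:

```python
# Minimal bang-bang sketch of closed-loop pneumatic thermal control:
# route air from the hot or cold chamber to an airbag until the sensed
# temperature reaches the target (all numbers are invented).
import random

def control_step(t_sensed: float, t_target: float, deadband: float = 0.5) -> str:
    """Choose which chamber to draw air from."""
    if t_sensed < t_target - deadband:
        return "hot"
    if t_sensed > t_target + deadband:
        return "cold"
    return "hold"

t = 26.0                  # simulated airbag temperature (deg C)
target = 38.0             # e.g., grabbing a warm virtual object
for _ in range(50):
    action = control_step(t, target)
    if action == "hot":
        t += 0.8          # hot-chamber air warms the airbag
    elif action == "cold":
        t -= 0.8          # cold-chamber air cools it
    t += random.uniform(-0.1, 0.1)   # sensor/ambient noise

print(f"final temperature: {t:.1f} C (target {target} C)")
```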
Eyes-free Smartwatches (Kening Zhu) expands the interaction space of touch-screen devices (e.g., smartphones and smartwatches) via bezel gestures. While existing work has focused on bezel-initiated swipe (BIS) on square screens, we investigated the usability of BIS on round smartwatches via six different circular bezel layouts. We evaluated user performance of BIS on these layouts in an eyes-free situation and found that BIS performance is highly orientation-dependent and varies significantly among users. We then compared the performance of personal and general machine-learning models, and found that personal models significantly improve accuracy for a range of layouts. Lastly, we discuss potential applications enabled by BIS.
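The toy snippet below illustrates, with invented angles and layouts, why a per-user ("personal") calibration can change which bezel segment an eyes-free swipe is assigned to; it is not the study's actual model, which was learned from user data:

```python
# Sketch: map a swipe's starting angle on a round bezel to one of
# n_segments; a per-user angular offset stands in for a "personal"
# model correcting individual bias (all values are hypothetical).
def classify(angle_deg: float, n_segments: int, user_offset_deg: float = 0.0) -> int:
    a = (angle_deg - user_offset_deg) % 360.0
    return int(a // (360.0 / n_segments))

swipes = [12.0, 95.0, 182.0, 268.0]           # hypothetical start angles
general = [classify(a, 4) for a in swipes]    # one model for everyone
personal = [classify(a, 4, user_offset_deg=15.0) for a in swipes]  # bias-corrected
print("general:", general, "personal:", personal)
```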
TipText (Kening Zhu) investigates new text-entry techniques using micro thumb-tip gestures, specifically a miniature QWERTY keyboard residing invisibly on the first segment of the index finger. Text entry is carried out by using the thumb-tip to tap the tip of the index finger. The keyboard layout is optimized for eyes-free input using a spatial model reflecting users' natural spatial awareness of key locations on the index finger. Our user evaluation showed that participants achieved an average text-entry speed of 11.9 WPM and typed as fast as 13.3 WPM towards the end of the experiment. Winner: Best Paper Award, ACM UIST 2019.
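A toy decoder in the spirit of TipText's spatial model is sketched below; the mini layout, Gaussian spread, and lexicon are invented for illustration (TipText itself uses an optimized QWERTY-like layout and a statistical decoder):

```python
# Rank candidate words by the product of per-tap 2D Gaussian
# likelihoods around key centers (all parameters are made up).
import math

KEYS = {"a": (0, 0), "b": (1, 0), "c": (2, 0),   # hypothetical 2x3 layout
        "d": (0, 1), "e": (1, 1), "f": (2, 1)}
SIGMA = 0.6                                       # spatial tap imprecision
LEXICON = ["bad", "bed", "cab", "fad"]

def tap_likelihood(tap, key):
    dx, dy = tap[0] - KEYS[key][0], tap[1] - KEYS[key][1]
    return math.exp(-(dx * dx + dy * dy) / (2 * SIGMA ** 2))

def decode(taps):
    def score(word):
        if len(word) != len(taps):
            return 0.0
        s = 1.0
        for tap, ch in zip(taps, word):
            s *= tap_likelihood(tap, ch)
        return s
    return max(LEXICON, key=score)

# Three noisy taps roughly over b, e, d should decode to "bed".
print(decode([(1.2, 0.1), (0.9, 1.1), (0.2, 0.8)]))
```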
PI: Kening Zhu
Chen, Taizhou, Lantian Xu, Xianshan Xu, and Kening Zhu (*). "GestOnHMD: Enabling Gesture-based Interaction on the Surface of Low-cost VR Head-Mounted Display". IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 5, pp. 2597-2607, May 2021, doi: 10.1109/TVCG.2021.3067689.
Low-cost virtual-reality (VR) head-mounted displays (HMDs) integrating smartphones have brought immersive VR to the masses and increased its ubiquity. However, these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and gesture-classification pipeline that leverages the stereo microphones in a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking the Google Cardboard as our focus headset, we first conducted a gesture-elicitation study to generate 150 user-defined gestures, 50 on each surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces, respectively, based on user preferences and signal detectability. We constructed a dataset containing the acoustic signals of 18 users performing these on-surface gestures, and trained deep-learning classification models for gesture detection and recognition. The three-step pipeline of GestOnHMD achieved an overall accuracy of 98.2% for gesture detection, 98.2% for surface recognition, and 97.7% for gesture recognition. Lastly, with a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.
For more information about the project, please visit https://meilab-hk.github.io/projectpages/gestonhmd.html
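The three-step structure of the pipeline (gesture detection, then surface recognition, then gesture recognition) can be sketched as below; the energy gate and channel-balance heuristics are placeholders for the paper's deep-learning models on stereo acoustic signals:

```python
# Structural sketch of a detection -> surface -> gesture pipeline
# (stub classifiers; GestOnHMD itself trains deep models).
import numpy as np

def detect_gesture(stereo: np.ndarray, threshold: float = 0.02) -> bool:
    """Step 1: is any on-surface gesture present? (signal-energy gate)"""
    return float(np.mean(stereo ** 2)) > threshold

def classify_surface(stereo: np.ndarray) -> str:
    """Step 2: which surface? Left/right channel energy balance is a
    crude stand-in for the learned surface classifier."""
    left, right = stereo
    balance = float(np.sum(left ** 2) - np.sum(right ** 2))
    if abs(balance) < 1e-3:
        return "front"
    return "left" if balance > 0 else "right"

def classify_gesture(stereo: np.ndarray, surface: str) -> str:
    """Step 3: per-surface gesture recognition (stubbed)."""
    return "tap" if float(np.max(np.abs(stereo))) > 0.5 else "scratch"

audio = np.random.default_rng(0).normal(0.0, 0.2, size=(2, 16000))
if detect_gesture(audio):
    surface = classify_surface(audio)
    print(surface, classify_gesture(audio, surface))
```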
PI: Miu Ling Lam
AIFNet is a deep neural network for removing spatially-varying defocus blur from a single defocused image. We leverage light-field synthetic-aperture and refocusing techniques to generate a large set of realistic defocused and all-in-focus image pairs depicting a variety of natural scenes for network training. AIFNet consists of three modules: defocus map estimation, deblurring, and domain adaptation. The effects and performance of the various network components are extensively evaluated, and we compare our method with existing solutions on several publicly available datasets. Quantitative and qualitative evaluations demonstrate that AIFNet achieves state-of-the-art performance.
For more information about the project, please visit https://sweb.cityu.edu.hk/miullam/AIFNET/
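The module composition described above can be summarized structurally as follows; the module bodies are placeholders (the real modules are trained networks), and only the data flow follows the paper's description:

```python
# Structural sketch of AIFNet's data flow: defocus-map estimation
# feeds spatially-varying deblurring; domain adaptation (a training-
# time module bridging synthetic and real data) is noted only.
import numpy as np

def estimate_defocus_map(image: np.ndarray) -> np.ndarray:
    """Predict per-pixel blur amount (stub)."""
    return np.full(image.shape[:2], 0.5, dtype=np.float32)

def deblur(image: np.ndarray, defocus_map: np.ndarray) -> np.ndarray:
    """Deblur conditioned on the defocus map (stub: identity)."""
    return image

def aifnet(image: np.ndarray) -> np.ndarray:
    return deblur(image, estimate_defocus_map(image))

defocused = np.zeros((64, 64, 3), dtype=np.float32)
print(aifnet(defocused).shape)   # all-in-focus output, same resolution
```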
Environment and Conservation Fund for “Embodied Weather: Promoting Public Understanding of Extreme Weather Through Immersive Multi-Sensory Virtual Reality”, 01-Dec-2018 to 31-May-2021, PI: Kening Zhu
The Hong Kong Jockey Club Charities Trust Project Fund for “Creative Assistive Technologies for the Elderly and Disabled through Virtual Reality Capabilities”, 01-Jan-2017 to 31-Jan-2020, PI: Miu Ling Lam
Hong Kong General Research Fund (GRF) for “Autostereoscopic Display using Mirror Array and Single Projector”, 01-Sep-2016 to 29-Feb-2020, PI: Miu Ling Lam
National Natural Science Foundation of China (NSFC - Young Scientists Fund) for “Audio-based and Haptic-based Multimodal Interactive Tangible Programming Environment for Blind Children”, 01-Jan-2020 to 31-Dec-2022, PI: Kening Zhu
Hong Kong General Research Fund (GRF) for “Data-driven Structure-adaptive Editing of Man-made Objects”, 01-Jan-2017 to 30-Jun-2020, PI: Hongbo Fu
Hong Kong Arts Development Council Project Grant for “Inverted Space”, 01-Mar-2017 to 30-Nov-2019, PI: Miu Ling Lam
Hong Kong General Research Fund (GRF) - Early Career Scheme (ECS) for “FuseFab: A Stereolithography-based 3D Printing Technique Leveraging Daily Objects as Molds in Personal Digital Fabrication”, 01-Jan-2017 to 31-Dec-2019, PI: Kening Zhu
Hong Kong General Research Fund (GRF) for “Towards Bridging the Gap between Freehand Sketches and 3D Models”, 01-Nov-2019 to 31-Oct-2022, PI: Hongbo Fu
Hong Kong General Research Fund (GRF) for “Support-driven Shape Analysis”, 01-Oct-2015 to 31-Mar-2019, PI: Hongbo Fu
National Natural Science Foundation of China (NSFC - Young Scientists Fund) for “Volumetric Display and Natural User Interface Based on 3D Fog Screen”, 01-Jan-2016 to 31-Dec-2018, PI: Miu Ling Lam
Innovation and Technology Fund (ITF - ITSP Tier 3) for “Natural User Interface Based on 3D Fog Display”, 01-Dec-2015 to 28-Feb-2018, PI: Miu Ling Lam
Ye, Hui, and Hongbo Fu. "ProGesAR: Mobile AR Prototyping for Proxemic and Gestural Interactions with Real-world IoT Enhanced Spaces." In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). doi:10.1145/3491102.3517689. PDF: https://sweb.cityu.edu.hk/hongbofu/doc/ProGesAR_CHI_2022.pdf
Ye, Hui, Kin Chung Kwan, Wanchao Su, and Hongbo Fu. "ARAnimator: In-situ Character Animation in Mobile AR with User-defined Motion Gestures." ACM Transactions on Graphics 39, no. 4 (2020). doi:10.1145/3386569.3392404.
Chen, Bin, Lingyan Ruan, and Miu-Ling Lam. "LFGAN: 4D Light Field Synthesis from a Single RGB Image." ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 16, no. 1 (2020): 1-20. doi:10.1145/3366371.
Cai, Shaoyu, Yuki Ban, Takuji Narumi, and Kening Zhu. "FrictGAN: Frictional Signal Generation from Fabric Texture Images using Generative Adversarial Network." In ICAT-EGVE 2020: International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments. The Eurographics Association, 2020. Best Paper Audience Choice Award.
Wong, Pui Chung, Kening Zhu, Xing-Dong Yang, and Hongbo Fu. "Exploring Eyes-free Bezel-initiated Swipe on Round Smartwatches." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-11. 2020.
Cai, Shaoyu, Pingchuan Ke, Takuji Narumi, and Kening Zhu. "ThermAirGlove: A Pneumatic Glove for Thermal Perception and Material Identification in Virtual Reality." In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 248-257. IEEE, 2020.
Chen, Shu-Yu, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu. "DeepFaceDrawing: Deep Generation of Face Images from Sketches." ACM Transactions on Graphics 39, no. 4 (2020). doi:10.1145/3386569.3392386.
Li, Lei, Changqing Zou, Youyi Zheng, Qingkun Su, Hongbo Fu, and Chiew-Lan Tai. "Sketch-R2CNN: An RNN-Rasterization-CNN Architecture for Vector Sketch Recognition." IEEE Transactions on Visualization and Computer Graphics (2020). doi:10.1109/tvcg.2020.2987626.
Shen, Yuefan, Changgeng Zhang, Hongbo Fu, Kun Zhou, and Youyi Zheng. "DeepSketchHair: Deep Sketch-based 3D Hair Modeling." IEEE Transactions on Visualization and Computer Graphics (2020). doi:10.1109/tvcg.2020.2968433.
Chen, Bin, Lingyan Ruan, and Miu-Ling Lam. "Light Field Display with Ellipsoidal Mirror Array and Single Projector." Optics Express 27, no. 15 (2019): 21999-22016. doi:10.1364/OE.27.021999.
Xu, Zheer, Pui Chung Wong, Jun Gong, Te-Yen Wu, Aditya Shekhar Nittala, Xiaojun Bi, Jürgen Steimle, Hongbo Fu, Kening Zhu, and Xing-Dong Yang. "TipText: Eyes-Free Text Entry on a Fingertip Keyboard." In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 883-899. 2019. Best Paper Award.
Zhu, Kening, Simon Perrault, Taizhou Chen, Shaoyu Cai, and Roshan Lalintha Peiris. "A sense of ice and fire: Exploring thermal feedback with multiple thermoelectric-cooling elements on a smart ring." International Journal of Human-Computer Studies 130 (2019): 234-247. doi:10.1016/j.ijhcs.2019.07.003.
Zhu, Kening, Taizhou Chen, Feng Han, and Yi-Shiun Wu. "HapTwist: creating interactive haptic proxies in virtual reality using low-cost twistable artefacts." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13. 2019.
Zou, Changqing, Haoran Mo, Chengying Gao, Ruofei Du, and Hongbo Fu. "Language-based Colorization of Scene Sketches." ACM Transactions on Graphics 38, no. 6 (2019): 1-16. doi:10.1145/3355089.3356561.
Xu, Pengfei, Guohang Yan, Hongbo Fu, Takeo Igarashi, Chiew-Lan Tai, and Hui Huang. "Global Beautification of 2D and 3D Layouts with Interactive Ambiguity Resolution." IEEE Transactions on Visualization and Computer Graphics (2019). doi:10.1109/tvcg.2019.2954321.
Gao, Lin, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, and Hao Zhang. "SDM-NET: Deep Generative Network for Structured Deformable Mesh." ACM Transactions on Graphics 38, no. 6 (2019): 1-15. doi:10.1145/3355089.3356488.
Xu, Pengfei, Hongbo Fu, Youyi Zheng, Karan Singh, Hui Huang, and Chiew-Lan Tai. "Model-guided 3D Sketching." IEEE Transactions on Visualization and Computer Graphics 25, no. 10 (2019): 2927-2939. doi:10.1109/TVCG.2018.2860016.
Yuan, Ming-Ze, Lin Gao, Hongbo Fu, and Shihong Xia. "Temporal Upsampling of Depth Maps Using a Hybrid Camera." IEEE Transactions on Visualization and Computer Graphics 25, no. 3 (2019): 1591-1602. doi:10.1109/tvcg.2018.2812879.
Li, Lei, Hongbo Fu, and Chiew-Lan Tai. "Fast Sketch Segmentation and Labeling With Deep Learning." IEEE Computer Graphics and Applications 39, no. 2 (2019): 38-51. doi:10.1109/mcg.2018.2884192.
Bao, Bin, and Hongbo Fu. "Scribble-based Colorization for Creating Smooth-shaded Vector Graphics." Computers & Graphics 81 (2019): 73-81. doi:10.1016/j.cag.2019.04.003.
Wong, Pui Chung, Kening Zhu, and Hongbo Fu. "FingerT9: Leveraging thumb-to-finger interaction for same-side-hand text entry on smartwatches." In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-10. 2018.
Zhu, Kening, Morten Fjeld, and Ayça Ünlüer. "WristOrigami: Exploring foldable design for multi-display smartwatch." In Proceedings of the 2018 Designing Interactive Systems Conference, pp. 1207-1218. 2018.
Su, Wanchao, Dong Du, Xin Yang, Shizhe Zhou, and Hongbo Fu. "Interactive Sketch-Based Normal Map Generation with Deep Neural Networks." Proceedings of the ACM on Computer Graphics and Interactive Techniques 1, no. 1 (2018): 1-17. doi:10.1145/3203186.
Zhu, Kening, Xiaojuan Ma, Haoyuan Chen, and Miaoyin Liang. "Tripartite effects: exploring users’ mental model of mobile gestures under the influence of operation, handheld posture, and interaction space." International Journal of Human–Computer Interaction 33, no. 6 (2017): 443-459. doi:10.1080/10447318.2016.1275432.
Fu, Qiang, Xiaowu Chen, Xiaotian Wang, Sijia Wen, Bin Zhou, and Hongbo Fu. "Adaptive Synthesis of Indoor Scenes via Activity-associated Object Relation Graphs." ACM Transactions on Graphics 36, no. 6 (2017): 1-13. doi:10.1145/3130800.3130805.
Li, Wing Ho Andy, Kening Zhu, and Hongbo Fu. "Exploring the Design Space of Bezel-Initiated Gestures for Mobile Interaction." International Journal of Mobile Human Computer Interaction (IJMHCI) 9, no. 1 (2017): 16-29. doi:10.4018/ijmhci.2017010102
Art Machines: Past/Present, Miu Ling Lam and Kaho Albert Yu, Dis/integration, Robotic installation, Indra and Harry Banga Gallery, City University of Hong Kong, Hong Kong, Nov 2020 – Feb 2021.
Ars Electronica Festival 2020, Hong Kong Garden: Art in Labs, Miu Ling Lam, Impression Machine, Online exhibition, Sep 2020.
Leonardo Da Vinci: Art & Science - Then & Now, Miu Ling Lam, Impression Machine, Robotic installation, Indra and Harry Banga Gallery, City University of Hong Kong, Hong Kong, Sep 2019 – Dec 2019.
On the Road: Nomination Exhibition of Chinese Young Artists’ Works & Forum of Young Art Critics, Miu Ling Lam, Guan Shanyue Art Museum, Shenzhen, China, Dec 2018.
On the Road: Young Media Artists in China, Miu Ling Lam, Run Run Shaw Creative Media Centre, City University of Hong Kong, Hong Kong, Mar – Apr 2018.
Minding the Digital, Miu Ling Lam, Design Society (founding partner: Victoria and Albert Museum, London), Shenzhen, China, Dec 2017 – Jun 2018.
Hongbo Fu, Lin Cao and Wanchao Su, “Sketch-based Face Image Generation Method and System” (基於草圖的人臉圖像生成方法及系統), China Patent Application No. 202010439641.2, filed 22 May 2020. (Patent pending)
Miu-Ling Lam, Dissolving the Border: Dialogue, Collaboration and Integration between Art and Science, Hong Kong Science Museum, Hong Kong, 1 November 2020
Hongbo Fu, “3D Sketching and Animation in Mobile AR”. China National Computer Congress. Oct. 2020.
Hongbo Fu, “Data-driven sketch interpretation”. Tsinghua University, China. Jan. 2020.
Miu Ling Lam, Bin Chen and Yaozhun Huang. "Apparatus for Generating Moveable Screen Across a Three-Dimensional Space." US Patent No. US10297031B2, filed December 8, 2015, published June 8, 2017, granted May 21, 2019.
Miu Ling Lam, Yaozhun Huang, Sze Chun Tsang and Bin Chen. "Electronic System for Creating an Image and a Method of Creating an Image." U.S. Patent No. US10432907B2, filed July 22, 2016, granted October 1, 2019.
Miu Ling Lam, Bin Chen and Lingyan Ruan. "Autostereoscopic Multi-view Display System and Related Apparatus." U.S. Patent Application No. 15/927,312, filed March 21, 2018, published September 26, 2019. (Patent pending)
Kening Zhu, Feng Han, Yi-Shiun Wu and Taizhou Chen, “Systems and Methods for Creating Haptic Proxies for Use in Virtual Reality.” U.S. Patent Application No. 16/392,142, filed April 23, 2019. (Patent pending)
Hongbo Fu and Kin Chung Kwan, “Mobi3DSketch : 3D Sketching in Mobile AR.” U.S. Patent Application No. 16/510,561, filed July 12, 2019. (Patent pending)
Kening Zhu, Leveraging The Sense of Touch in HCI, Art & Technology Seminar, 13th International Academic Symposium: Evolution of Technology and Future of the Creative Contents Industry for the Fourth Industrial Revolution, Research Institute of Korean Dance, Hanyang University, Seoul, Korea, 2 November 2019.
Kening Zhu, Leveraging The Sense of Touch in HCI, Imagination Seminar, Sogang University, Seoul, Korea, 1 November 2019.
Kening Zhu, Weaving the Threads of Traditional Culture and Human-Computer Interfaces: Interactive Technologies Inspired by The Art of Paper-craft, Chalmers University of Technology, Gothenburg, Sweden, 10 May 2019.
Hongbo Fu, “Data-driven sketch interpretation”. Advanced Lectures on Image and Graphics, Shenzhen University, China. Dec. 2019.
Hongbo Fu, “Data-driven sketch interpretation”. Summer School, Shandong University, China. July 2019.
Miu-Ling Lam, Future Interface, Design Society, Shenzhen, China, 11 March 2018
Kening Zhu, Bridging Interactive Technologies and The Art of Paper-craft: Interactive Technologies Inspired by The Art of Paper-craft, "Art Innovation" International Symposium, Kyoto University, 16 March 2018.
Kening Zhu, The Singularity: Virtual Reality and Artificial Intelligence, Art Basel Hong Kong, 31 March 2018.
Hongbo Fu, “Data-driven sketch interpretation”. 7th International Consortium of Chinese Mathematicians in Computational and Applied Mathematics (ICCM-CAM). Dec. 2018.
Hongbo Fu, “Data-driven sketch interpretation”. Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong. Dec. 2018.
People