Our project aims to develop a wearable device platform that diverges significantly from traditional screen-based interaction. By leveraging voice and gesture commands, the project seeks to create a more natural and intuitive user experience that allows individuals to remain engaged with their physical surroundings while accessing and interacting with digital information. This approach promises more seamless, just-in-time, and intelligent computing support that can enhance productivity, safety, and overall quality of life without the cognitive and social drawbacks associated with current mobile technologies.

Key areas of focus for the project include:

  • Hardware Design: The development of lightweight, comfortable, and aesthetically pleasing wearable devices capable of supporting advanced computing functionalities. This involves integrating sensors and input/output mechanisms that are conducive to voice and gesture-based interactions, ensuring that the technology is both accessible and practical for everyday use.

  • Interaction Models: The creation of innovative interaction paradigms that can accurately interpret and respond to natural human behaviors. This includes refining voice recognition and gesture detection algorithms to understand a wide range of commands and gestures, thereby allowing for a more fluid and flexible user experience. We also investigate the use of multiple modes of interaction to provide a more robust and adaptable computing experience. By offering a variety of input and output options, the system can cater to diverse user preferences and situational requirements, ensuring that the technology remains effective across different contexts and use cases.

  • Integration Methodologies: Exploring strategies to seamlessly blend computing into the user's sensory experience. This entails designing context-aware systems that can intelligently adjust to the user's environment and activities, providing relevant information and support without unnecessary interruptions or distractions. The project aims to achieve a balance between digital connectivity and real-world presence, enabling users to access digital resources in a manner that feels like an extension of their natural senses.

The ultimate goal of the "Heads-Up Computing" project is to pioneer a new era of mobile computing that enhances human capabilities and interactions without the downsides of screen dependency. Through innovative hardware design, intuitive interaction models, and intelligent integration methodologies, the project seeks to create a future where technology serves to augment human experiences in a way that is both empowering and harmonious with the physical world.

GRF #11605622, 2022/23

PI :: PerMagnus Lindborg (City University of Hong Kong)

Co-Is :: Francesco Aletta (University College London, UK), Kongmeng Liew (Nara Institute of Science and Technology, Japan), Yudai Matsuda (City University of Hong Kong), Jieling Xiao (Birmingham City University, UK).

Project website :: http://soundlab.scm.cityu.edu.hk/mmhk/

 

Abstract

Sensory cultural heritage, combining tangible and intangible heritage, creates identity and cohesion in a community. In urban research, analysis of everyday and informal customs typically relies on visual images, texts, and archival materials to describe the multifarious aspects of culturally significant places and practices. By contrast, the acoustic environment is often not part of the narrative, and the olfactory environment is recorded even more rarely. Given the contemporary context of rapid and profound transformation in Hong Kong, essential threads of the city’s fabric risk being neglected, and might even disappear before they can be documented. Can we really claim to know urban places without thoroughly considering, and documenting, the sensory cultural heritage represented by sounds and smells?

Our project seeks to preserve the threatened environments of some of Hong Kong’s signature sites and to create a richer, more accurate understanding of culturally important places, rituals, and social practices, allowing greater appreciation of this heritage. At the same time, the project aims to shed light on the crossmodal relationships between urban landscape, soundscape, and smellscape.

We propose a multimodal research approach that takes sound and smell as core components of the immersive urban experience. The project will document a large sample of characteristic sites in Hong Kong, focusing on places for street food (街頭小食), Chinese temples (寺廟 [佛祖, 天后…]), and wet markets (傳統市場). The database will be open-access via a dedicated project website, connecting with recently initiated international soundscape–smellscape projects. It will address the current need for detailed documentation of the local cultural heritage; support interdisciplinary collaborations; serve as a significant resource for future longitudinal studies of urbanism in Hong Kong; and provide a reference point for cross-cultural studies with other cities.

In terms of methodology, we will develop the capacity to systematically collect and analyse data from complex physical environments, integrating sonic and olfactory measurements with video capture and narratives. Field data will be both objective and subjective, including 360° video, 3D audio (Ambisonics), and ‘smellprints’ (gas chromatography–mass spectrometry analysis of air samples), as well as sensory walks in which observers make structured annotations of the perceived visual, auditory, and olfactory environment, and interviews with stakeholders. The database generated in the project will serve further research in environmental psychology, multimodal perception, and sensory integration. It will also prepare the ground for future multisensorial applications in virtual tourism, art, games, film, and spatial design at museums, galleries, and commercial venues.
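
To illustrate how such heterogeneous field data might be organised, the sketch below shows one possible record structure for a single documented site. This is a minimal, hypothetical example for clarity only; the field names, categories, and file paths are placeholders and do not represent the project’s actual database schema.

```python
# Hypothetical record structure for one documented site (illustrative only;
# the project's actual database schema may differ). All names and values
# below are placeholders.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SensoryWalkAnnotation:
    observer_id: str
    timestamp: str            # ISO 8601, e.g. "2023-10-14T18:30:00+08:00"
    visual_notes: str
    auditory_notes: str
    olfactory_notes: str


@dataclass
class SiteRecord:
    site_name: str            # e.g. "Example wet market"
    category: str             # "street food" | "temple" | "wet market"
    latitude: float
    longitude: float
    video_360_path: str       # path to the 360° video recording
    ambisonics_path: str      # path to the 3D (Ambisonics) audio recording
    smellprint_path: Optional[str] = None   # GC-MS analysis results, if available
    annotations: List[SensoryWalkAnnotation] = field(default_factory=list)
    interview_transcripts: List[str] = field(default_factory=list)


# Example: a minimal record with placeholder values.
example = SiteRecord(
    site_name="Example wet market",
    category="wet market",
    latitude=22.28,
    longitude=114.16,
    video_360_path="media/example_site_360.mp4",
    ambisonics_path="media/example_site_ambix.wav",
)
```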

Design Strategies for Concurrent Sonification-Visualisation of Geodata

  • PI :: PerMagnus Lindborg, PhD, Associate Professor, School of Creative Media, City University of Hong Kong
  • Co-I :: Sara Lenzi, PhD, Ikerbasque Research Fellow, University of Deusto, Spain
  • Co-I :: Paolo Ciuccarelli, Professor of Design, Northeastern University, Boston, USA

 

About

Sonification is the translation of data into sound. Inherently interdisciplinary, the field has seen tremendous development, characterised by 1) an expanding definition that embraces aesthetics, via electroacoustic music composition; 2) professionalisation of terminology, techniques, and community-building; and 3) increased attention to visualisation. The time is ripe to focus efforts on the third point. We employ knowledge from dynamic data visualisation to improve sonification techniques, to generate a theoretical framework informed by cross-modal perception, and to determine practicable strategies for concurrent sonification-visualisation design. The project targets are: 1) a set of design guidelines, and 2) a proof-of-concept software system applied to geodata of real-life importance, such as rain and wind, pollution and traffic, forest fires and landslides. People seek to understand their physical environment, and accurate and engaging information design helps both in everyday activities and in making life choices. Laying the research groundwork for a concurrent sonification-visualisation system for communicating environmental geodata has the potential for real-life applications with broad public appeal and societal impact. Ultimately, the goal of the project is to contribute to the digital fabric of society and improve people’s quality of life.
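
As a simple illustration of the underlying idea, the sketch below maps a short series of hypothetical rainfall values to pitch and renders the result as a WAV file. This is a minimal parameter-mapping example, not the project’s system; the data values, mapping ranges, and output file name are assumptions made for the example.

```python
# Minimal parameter-mapping sonification sketch (illustrative only).
# Hypothetical rainfall values are mapped linearly to pitch and written
# to a mono WAV file using numpy and the standard library.
import wave
import numpy as np

SAMPLE_RATE = 44100          # samples per second
NOTE_DURATION = 0.25         # seconds per data point

# Hypothetical daily rainfall readings in millimetres (placeholder data).
rainfall_mm = [0.0, 2.5, 12.0, 30.5, 8.0, 0.5]


def value_to_frequency(value, vmin=0.0, vmax=40.0, fmin=220.0, fmax=880.0):
    """Linearly map a data value to a frequency in Hz (one design choice
    among many; logarithmic or pitch-class mappings are equally possible)."""
    t = (value - vmin) / (vmax - vmin)
    return fmin + t * (fmax - fmin)


def tone(freq, duration, rate=SAMPLE_RATE):
    """Generate a sine tone with short fades to avoid clicks."""
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    samples = 0.5 * np.sin(2 * np.pi * freq * t)
    fade = np.linspace(0.0, 1.0, int(0.01 * rate))
    samples[: fade.size] *= fade
    samples[-fade.size:] *= fade[::-1]
    return samples


# Concatenate one tone per data point and save as 16-bit PCM.
audio = np.concatenate([tone(value_to_frequency(v), NOTE_DURATION)
                        for v in rainfall_mm])
pcm = (audio * 32767).astype(np.int16)

with wave.open("rainfall_sonification.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(pcm.tobytes())
```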

 

Funding

Strategic Research Grant (SRG-Fd), City University of Hong Kong (2023/09–2025/02)

Publications by the team

  • Lindborg PM, Caiola V, Chen M, Ciuccarelli P, Lenzi S (2023/09, in review). “A Meta-Analysis of Project Classifications in the Data Sonification Archive”. J Audio Engineering Society.
  • Lenzi S, Lindborg PM, Han NZ, Spagnol S, Kamphuis D, Özcan E (2023/09). “Disturbed Sleep: Estimating Night-time Sound Annoyance at a Hospital Ward”. Proc European Acoustics Association.
  • Lindborg PM, Lenzi S & Chen M (2023/01). “Climate Data Sonification and Visualisation: An Analysis of Aesthetics, Characteristics, and Topics in 32 Recent Projects”. Frontiers Psych 13.
  • Lenzi S, Sádaba J, and Lindborg PM (2021). “Soundscape in Times of Change: Case Study of a City Neighbourhood During the COVID-19 Lockdown.” Frontiers Psych 12:412.
  • Lenzi S & Ciuccarelli P (2020). “Intentionality and design in the data sonification of social issues.” Big Data and Society.

From the 1970s through the 1990s, the Korean film market, like the markets of many countries around the world, was dominated by Hollywood. The majority of film critics, students, and industry professionals viewed the future of South Korean cinema as bleak. Surprisingly, in 2001, South Korea became the first film industry in recent history to reclaim its domestic market from Hollywood. Since then, South Korean cinema has made history. Indeed, South Korean cinema provides one of the most striking case studies of non-Western cinematic success in the age of the neoliberal world order and Hollywood’s domination of the global film market. What happened in South Korean film culture between 1992 and 2003? How did what was once an “invisible” cinema become one of the world’s most influential film industries so quickly? And what implications does the South Korean film renaissance have for the ways we approach national and transnational cinema more broadly?

ACR Lab’s project “The South Korean Film and Media Industry” will host an international symposium on the subject and also publish a series of books, journal articles, and special issues. The South Korean Film Industry, the project’s first outcome, is the first detailed scholarly overview of the South Korean film industry. This edited volume maps out a compelling and authoritative vision of how that field may be approached in historical and industrial terms. 

 

Forthcoming Publications

The South Korean Film Industry
Edited by Sangjoon Lee (lead editor), Dal Yong Jin, and Junhyoung Cho. University of Michigan Press (August 2024).
https://press.umich.edu/Books/T/The-South-Korean-Film-Industry2

Published Special Issue

“Is Netflix Riding the Korean Wave or Vice Versa?”
International Journal of Communication 17:1 (November 2023). 
Edited by Dal Yong Jin, Sangjoon Lee, and Seok-Kyeong Hong
https://ijoc.org/index.php/ijoc/article/view/20718

Ongoing Project

Netflix and the South Korean Media
Edited by Sangjoon Lee (lead editor), Dal Yong Jin, and Seok-Kyeong Hong
Brill (expected in 2025).