BIG TALK: Robin Brewer, University of Michigan

Topic: Towards accessibility and inclusion for all
When: April 24, 2025 14:00–15:00 BST, online (register on Eventbrite)

This Human-Computer Interaction talk sits at the intersection of social computing and accessibility. I ask how we can best represent disability and older age in systems. I design, build, and study systems to better support technology use (and non-use) by older adults and people with vision impairments. Most recently, I have investigated the role of accessible voice communities through voice assistants and Interactive Voice Response tools, and how these technologies support or fail to support social and informational needs. My research team of postdocs and students is also working on projects related to obfuscation and AI visual access, care platforms and labor, memory and digitization, older age and digital harms, accessible large language models, and mitigating information uncertainty in conversational technologies.

Bio: I am an Assistant Professor at the University of Michigan’s School of Information (UMSI). In 2021, I co-founded the Accessibility, HCI, and Aging (AHA!) lab across the School of Information, the Department of Computer Science and Engineering, and the Stamps School of Art & Design. I also hold affiliate positions in the Center for Ethics, Society, and Computing (ESC, pronounced “escape”), the Digital Studies Institute (DSI), the Michigan Center on the Demography of Aging (MiCDA), and the Institute for Healthcare Policy and Innovation (IHPI). My research has been funded by the NSF, Google, the NIH, and the Retirement Research Foundation. I have also worked at Google as a visiting researcher, and at Microsoft Research, Facebook, and IBM Research as a user experience researcher. I was awarded a President’s Postdoctoral Fellowship at the University of Michigan and hold a Ph.D. in Technology and Social Behavior from Northwestern University, an M.S. in Human-Centered Computing from UMBC, and a B.S. in Computer Science from the University of Maryland.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Ana Tajadura-Jiménez, Universidad Carlos III de Madrid

Topic: Sensing the body through sound
When: February 6, 2025, 2-3pm GMT, online (register on Eventbrite)

Music makes us dance and move, but can sounds do more for our body? We may easily think that hearing is the least relevant modality for our sense of bodily self, compared, for instance, to touch, vision and interoception. Yet audition provides rich information about what is happening inside and crucially outside of our bodies: we hear ourselves breathing, or our joints crack; we hear our hands clapping against each other or stroking a piece of velvet; we hear the sounds of our footsteps mixing with those of others as we go down the stairs. Rarely is there an action or event that we are involved in which is silent, and yet audition remains relatively ignored as a contributor to our sense of self. This talk aims to correct this oversight, by highlighting the surprising but also special contributions that audition brings to our sense of self.

Bio: Ana Tajadura-Jiménez is an Associate Professor at UC3M and an Honorary Research Associate at the University College London Interaction Centre (UCLIC). She leads the i_mBODY lab, whose research focuses on understanding how sensory-based interaction technologies could be used to alter people’s perceptions of their own body, their emotional state, and their motor behaviour patterns. Her research is empirical and multidisciplinary, combining perspectives from psychoacoustics, neuroscience, and Human-Computer Interaction (HCI).

She is currently Principal Investigator of the AEI-funded Magic OutFit project, which aims to inform the design of technology to make people feel better about their bodies and sustain healthy lifestyles. She is also Principal Investigator of the BODYinTRANSIT project, funded by a Consolidator Grant from the European Research Council. Prior to this, she obtained a PhD in Applied Acoustics at Chalmers University of Technology (Sweden). She was a post-doctoral researcher in the Lab of Action and Body at Royal Holloway, University of London, an ESRC Future Research Leader and Principal Investigator of The Hearing Body project at University College London Interaction Centre (UCLIC), and a Ramón y Cajal fellow at Universidad Loyola Andalucía. Her work has led her to receive the 2019 “Excellence Award” from the UC3M Consejo Social and the 2021 Science and Engineering Award from the Fundación Banco Sabadell. Her current research interests include body perception, embodied cognition, affective interaction, virtual reality, and wearable and self-care technologies to support emotional and physical health.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Michael Proulx, Meta Reality Labs

Topic: Eye tracking in extended reality
When: January 9, 2025, 2-3pm GMT, online (register on Eventbrite)

Bio: What does it mean, to see? To perceive? My primary interest in psychology is cognition at the nexus of technology and neuroscience. My research takes root in examining cognition and attentional control within the visual system, and in examining how multisensory processing contributes to perception and cognition. How we pay attention to the many features and objects in our environment is a key aspect of cognition, and the visual system and eye tracking provide an excellent system for its study. Advancing eye tracking applications for extended reality (XR) is now my key focus. I am currently a Research Scientist at Reality Labs Research (Meta née Oculus), where I am advancing eye tracking and other research for new applications in extended reality (AR/VR/MR).

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Kevin Doherty, University College Dublin

Topic: Advancing a human-centred approach to person-centred care for the digital age
When: December 5, 2024 2-3pm GMT, online (register on Eventbrite)

This talk will discuss research that focuses on advancing a human-centred approach to person-centred care for the digital age — through the design, development, and evaluation of digital tools to enhance the clinical practice of healthcare, everyday mental health, and digital wellbeing. The current research projects discussed span the development of digital and AI tools to support the online and face-to-face practice of therapy, self-report technologies to inform and facilitate access to mental healthcare, and decision-support systems to augment care for chronic, co-morbid conditions.

Bio: Dr Kevin Doherty is Ad Astra Assistant Professor of Human-Computer Interaction (HCI) at University College Dublin’s School of Information and Communication Studies. He holds a PhD in Human-Computer Interaction from Trinity College Dublin, an MAI in Electronic & Computer Systems from the Grenoble Institute of Technology, and an MSc in Medical Device Design from the National College of Art and Design Dublin. Dr Doherty is Director of UCD’s MSc in Human-Computer Interaction Programme, a leading member of the cross-disciplinary HCI@UCD research group, and a member of the AI Healthcare Hub at the UCD Institute for Discovery, the ADAPT Centre’s Health Working Group, UCD’s Community of Practice for Public Engagement, the Irish Chapter of ACM SIGCHI, and the Copenhagen Center for Health Technology.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Heloisa Candello, IBM Research Brazil

Topic: A human-centered approach to responsible conversational user interfaces
When: November 28, 2024 2-3pm GMT, online (register on Eventbrite)

Responsible AI has been a popular topic in academic and industry settings with the advent of conversational AI based on generative models in the last two years. Despite the growing scientific research in this field, what should be considered when designing for responsibility in conversational AI interactions is still being investigated. In this talk, I will revisit some of the CUI work our community has published in the last six years, along with my projects in education and finance, to unveil the main definitions, criteria, and methodologies considered when designing responsible AI systems. We will discuss responsible AI in public and private settings, and the human values identified as essential to designing responsible CUIs. Our research projects have investigated how to foster trust, accountability, transparency, fairness, and acceptance of conversational user interfaces deployed in natural settings with diverse audiences. Furthermore, we will discuss how bias emerged as a criterion when interacting with CUIs in museum settings, how we investigated accountability and trust embedded in financial advisors’ chatbots, and our recent work on elucidating values such as creditworthiness with micro businesswomen in underrepresented communities using conversational systems. This talk can serve as the basis for further discussion on promoting social impact and mitigating harm when designing conversational generative systems responsibly.

Bio: Dr. Heloisa Candello is a Senior Research Scientist in the Responsible Tech group at IBM Research – Brazil, where she works with a team of talented researchers, software engineers, and designers on innovative AI solutions. She has been with IBM for over 10 years, applying her expertise in user research and user experience design to create engaging and ethical AI interactive systems, especially conversational interfaces.

With a PhD in Human-Computer Interaction, Dr. Candello is passionate about exploring the design issues and opportunities involved in human-AI collaboration. She has published multiple papers in prestigious conferences and journals and received an honorable mention award at CHI 2019. Heloisa is currently an ACM Distinguished Speaker and an active volunteer and contributor to the ACM SIGCHI community, where she served as a member of the Volunteer Development Committee for two years and is now a co-chair of the LATAM committee and the CUI Steering Committee. Her goal is to advance the field of HCI and AI, and to empower people with the support of responsible AI technologies.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Ding Wang, Google AI

Topic: Whose AI Dream? In search of the aspiration in data annotation
When: November 21, 2024, 2-3pm GMT, online (register on Eventbrite)

This talk explores the critical importance of annotator perspectives—encompassing their diverse demographics, cultural backgrounds, and lived experiences—in building responsible AI/ML. Challenging the perception of data annotation as simple and standardized, Ding’s research delves into the complexities of annotator viewpoints and work practices, examining how these diverse perspectives impact data quality. Through interviews, ethnography, and mixed methods, this work uncovers a disconnect between acknowledging the importance of diversity and actively incorporating it into dataset production. This is illustrated by examining the annotation of dialogue safety in chatbots, where defining “safety” is inherently subjective and influenced by cultural norms. Moving beyond “gold labels” as absolute truth, this talk proposes alternative methods for interpreting data that embrace annotator disagreement and incorporate qualitative assessments to build more robust and responsible AI models.

Bio: Ding Wang is an HCI researcher at Google Research, currently working with the Technology, AI, Society and Culture team. Her research explores the intersection of HCI and AI, specifically the labor involved in data production and its impact on AI systems. Prior to joining Google, Ding completed her postdoctoral research at Microsoft Research India, where her projects focused on the future of work and healthcare. She received her PhD from the HighWire Centre for Doctoral Training at Lancaster University. Her doctoral thesis offers a critical, alternative view of how smart cities should be designed, developed, and evaluated.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Alexandra Ion, Carnegie Mellon University

Topic: Embedded Physical AI 
When: November 14, 2024 2-3pm GMT, online (register on Eventbrite)

The world around us is inherently physical, yet adaptive interfaces focus mostly on digital content and representations. Physical AI is moving in the right direction by creating models that can understand instructions and perform physical tasks in the real world, typically with humanoid or quadrupedal robots as physical AI agents.

In this talk, I envision a future where such physical AI agents move into the background—instead of interacting with large general-purpose robots, we should interact with physical objects that are familiar to us. I call this *Embedded Physical AI*. Bottles, desks, chairs, walls: any object surrounding us should dynamically adapt to our needs, not only adjusting its user interface to fit the context but also transforming its physical features, material properties, and affordances to instantly become exactly what users need in that moment.

I will discuss how creating such intelligent agents requires two main components: (1) adaptive physical architectures to facilitate physical change, e.g., through metamaterials, shape-changing interfaces, soft robotics, etc., and (2) sensing and prediction systems to understand when and what functionality users require. With new approaches in predictive user modeling and innovations in manufacturing and material science, the time might just be ripe to make this challenging vision a reality.

Bio: I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science. I direct the Interactive Structures Lab, where we investigate and develop interactive design tools that enable novice users to digitally fabricate complex structures. Interactive structures embed functionality within their geometry such that they can react to simple input with complex behavior. Such structures enable materials that can, for example, embed robotic movement, perform computations, or communicate with users. We develop optimization-based interactive design tools that enable novices to contribute their creativity and experts to apply their intuition, in order to foster the advancement of high-tech materials. Before joining CMU, I was a postdoctoral researcher at ETH Zurich and completed my PhD at the Hasso Plattner Institute, a small, highly selective, top-tier institute for computer science in Germany. My work has been published and awarded at top-tier HCI (ACM CHI & UIST) and graphics (ACM SIGGRAPH) venues. It was invited for multiple exhibitions, including a permanent exhibition at the Ars Electronica Center in Austria. My work has also captured the interest of media such as Wired, Dezeen, Fast Company, and Gizmodo, and was the subject of an invited TEDx talk.

All welcome! Sign up on Eventbrite.

Applications are open for our online Masters in HCI – learn more and register for info sessions here.

BIG TALK: Atau Tanaka, University of Bristol

Topic: Sonic Entanglements with Electromyography: Between Bodies, Signals, and Representations
When: October 10, 2024 13:00–14:00 BST, in person and online (register on Eventbrite)

This talk presents a paper recently published at DIS 2024 that looks at sound/music interactions using electromyography (EMG) to instrumentalise muscle exertion of the human body. I situate EMG within a family of embodied interaction modalities, where it occupies a middle ground: considered a “signal from the inside” compared with external observations of the body (e.g., motion capture), but also seen as more volitional than neurological states recorded by electroencephalography (EEG). To understand the messiness of gestural interaction afforded by EMG, we revisit the phenomenological turn in HCI, counterposing it against the grain of recent posthumanist thought, which offers performative interpretations of entanglements between bodies, signals, and representations. We take music performance as a use case, reporting on the opportunities and constraints posed by EMG in workshop-based studies of vocal, instrumental, and electronic practices.

Bio: Atau Tanaka conducts research in embodied musical interaction. This work takes place at the intersection of human-computer interaction and gestural computer music performance. He has a joint position at BIG’s Culture Lab and the Centre for Sound, Technology & Culture at Goldsmiths, University of London. Atau has previously been Artistic Ambassador at Apple, a researcher at Sony Computer Science Laboratory, and professor and guest professor in Japan, Germany, France, and northeast England. His work has been supported by the European Research Council (ERC), Horizon 2020, and both the science and humanities sections of Research Councils UK (RCUK).

All welcome! Sign up on Eventbrite.

Applications are open for our upcoming online Masters in Human-Computer Interaction – learn more and register for info sessions here.

BIG TALK: Amanda Lazar, University of Maryland

Topic: Centering the tension between critical perspectives and practice to advance HCI research on health and aging
When: August 5, 2024 13:00–14:00 BST, in person

Technology designers and developers have focused on the domain of health and aging for decades. Recently, researchers have been adopting critical perspectives that push back on prior ways of approaching technologies in these spaces. For example, researchers are calling attention to how the overwhelming focus of aging interventions on addressing cognitive and physical decline links to a deficit view of aging, which can contribute to stigma and to the neglect of older adults’ other needs. I center these and other tensions between practice-based and critical approaches in my work, arguing that it is important to rigorously attend to and learn from both approaches. In this talk, I will present several projects on technology for health and aging. First, I will argue for the importance of understanding the tension between critical and practice-based approaches, and show how these can be traced in our research. Then, I will present my work that seeks to leverage these tensions to advance design.

Bio: Amanda Lazar is an assistant professor in the College of Information Studies, with an affiliate appointment in the Department of Computer Science, at the University of Maryland, College Park. She received her PhD from the University of Washington in Biomedical and Health Informatics. Her research in Human-Computer Interaction examines the design of technology for older adults, with a particular focus on older adults with dementia, to support social interaction and engagement in activities. Her work is supported by the National Science Foundation (NSF) and the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR).

BIG TALK: Janet Read, University of Central Lancashire

Topic: Children doing HCI work with us: Fair, Fun and Fruitful Engagement
When: July 22, 2024 13:00–14:00 BST (in person)

Over the last twenty years, Janet has worked with children on design, evaluation, and research studies with a constant wish to ensure that their time is well spent, that their contributions are recognised, and that as many children as possible can be included and empowered in HCI research while having a fabulous time. This talk will bring together empirical studies on the inclusion of children as design partners, methods for the better treatment of children’s ideas and contributions, and practical advice on gaining informed assent from children.