
Rebecca Adaimi, PhD

"Towards lifelong modeling of human behavior"

2022

    AudioIMU: Enhancing Inertial Sensing-Based Activity Recognition with Acoustic Models

    Modern commercial wearable devices are widely equipped with inertial measurement units (IMUs) and microphones. The motion and audio signals captured by these sensors can be used for recognizing a variety of user physical activities. Compared to motion data, audio data contains rich contextual information about human activities, but continuous audio sensing also poses extra data sampling burdens and privacy issues. Given such challenges, this paper studies a novel approach to augment IMU models for human activity recognition (HAR) with the superior acoustic knowledge of activities. Specifically, we propose a teacher-student framework to derive an IMU-based HAR model… Read more

    Conference Paper · ISWC · Knowledge Distillation · Sound Sensing · Inertial Sensing · Human Activity Recognition · Audio Processing
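The teacher-student idea in this paper follows the general pattern of knowledge distillation: a stronger teacher (here, an acoustic model) supervises a weaker student (an IMU model) through softened class probabilities. A minimal sketch of a Hinton-style distillation loss is below; it is a generic illustration in NumPy, not the paper's exact training objective, and the temperature value is an assumption.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's softened class distribution
    and the student's, scaled by T^2 as in standard distillation."""
    p = softmax(teacher_logits, T)   # teacher "soft labels"
    q = softmax(student_logits, T)   # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

In a full pipeline this term would be combined with an ordinary cross-entropy loss on the ground-truth activity labels, with a weighting coefficient between the two.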

    Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches

    Automatically recognizing a broad spectrum of human activities is key to realizing many compelling applications in health, personal assistance, human-computer interaction and smart environments. However, in real-world settings, approaches to human action perception have been largely constrained to detecting mobility states, e.g., walking, running, standing. In this work, we explore the use of inertial-acoustic sensing provided by off-the-shelf commodity smartwatches for detecting activities of daily living (ADLs). We conduct a semi-naturalistic study with a diverse set of 15 participants in their own homes and show that acoustic and inertial sensor data can be combined to recognize 23 activities… Read more

    Journal Paper · IMWUT · Multimodal Classification · Sound Sensing · Inertial Sensing · Human Activity Recognition · Audio Processing · Smartwatch · Dataset · Activities of Daily Living · Ubiquitous and Mobile Computing
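Combining acoustic and inertial sensor data for classification is often done with late fusion: each modality produces class probabilities and the two are merged. The sketch below shows a simple weighted late-fusion rule; the weighting scheme is illustrative and not necessarily the fusion method used in the paper.

```python
import numpy as np

def late_fuse(p_audio, p_imu, w=0.5):
    """Late fusion: combine per-modality class-probability vectors with
    weight w on audio and (1 - w) on inertial, then renormalize.
    Returns the fused distribution and the predicted class index."""
    p = w * np.asarray(p_audio, dtype=float) + (1 - w) * np.asarray(p_imu, dtype=float)
    p = p / p.sum()
    return p, int(np.argmax(p))
```

With `w = 0.5` this reduces to averaging the two classifiers' outputs; tuning `w` on validation data lets one modality dominate when it is more reliable for a given activity.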

    Automated Detection of Foreground Speech with Wearable Sensing in Everyday Home Environments: A Transfer Learning Approach

    Acoustic sensing has proved effective as a foundation for numerous applications in health and human behavior analysis. In this work, we focus on the problem of detecting in-person social interactions in naturalistic settings from audio captured by a smartwatch. As a first step towards detecting social interactions, it is critical to distinguish the speech of the individual wearing the watch from all other sounds nearby, such as speech from other individuals and ambient sounds. This is very challenging in realistic settings, where interactions take place spontaneously and supervised models cannot be trained a priori to recognize the full complexity of dynamic social environments. In this paper, we introduce a transfer learning-based approach to detect foreground speech of users wearing a smartwatch… Read more

    ArXiv Paper · Foreground Speech Detection · Audio Processing · Speech Detection · Transfer Learning · Wearable Sensing · Social Interactions
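A common form of transfer learning for audio tasks is to freeze a pretrained encoder and train only a small classification head on the target labels. The sketch below illustrates this with a logistic-regression head trained by gradient descent on frozen features; it is a generic pattern, and the encoder, learning rate, and epoch count are assumptions rather than the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_head(features, labels, lr=0.1, epochs=200):
    """Transfer-learning sketch: `features` are embeddings from a frozen
    pretrained audio encoder; only a logistic-regression head is trained
    on the target labels (e.g., foreground speech vs. everything else)."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y                      # gradient of binary cross-entropy
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

Because only the head's parameters are updated, very little labeled target data is needed compared to training the whole model from scratch.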

    Lifelong Adaptive Machine Learning for Sensor-based Human Activity Recognition Using Prototypical Networks

    Continual learning, also known as lifelong learning, is an emerging research topic that has been attracting increasing interest in the field of machine learning. With human activity recognition (HAR) playing a key role in enabling numerous real-world applications, an essential step towards the long-term deployment of such recognition systems is to extend the activity model to dynamically adapt to changes in people's everyday behavior. Research on continual learning in the HAR domain remains under-explored, with existing work largely adapting methods developed for computer vision. Moreover, analysis has so far focused on task-incremental or class-incremental learning paradigms where task boundaries are known. This impedes the applicability of such methods to real-world systems, where data arrives as a random stream. To push this field forward, we build on recent advances in the area of continual machine learning and design a lifelong adaptive learning framework using Prototypical Networks… Read more

    Journal Paper · Sensors Journal · Continual Learning · Lifelong Learning · Incremental Learning · Online Learning · Prototypical Networks · Catastrophic Forgetting · Human Activity Recognition
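The core mechanism of a Prototypical Network is simple: each class is represented by a prototype (the mean of its embedded examples), and new samples are classified by their distance to the nearest prototype. Because prototypes can be updated as new data streams in, this representation lends itself to continual learning. A minimal sketch, independent of the paper's specific framework:

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Prototype per class = mean of that class's embedding vectors."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype(x, classes, protos):
    """Classify x by Euclidean distance to the nearest class prototype."""
    d = np.linalg.norm(protos - x, axis=1)
    return classes[int(np.argmin(d))]
```

Adding a new activity class only requires computing one new prototype, rather than retraining a softmax classifier, which is one reason prototype-based methods are attractive for mitigating catastrophic forgetting.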
    2021

    Ok Google, What Am I Doing? Acoustic Activity Recognition Bounded by Conversational Assistant Interactions

    Conversational assistants in the form of stand-alone devices such as Amazon Echo and Google Home have become popular and embraced by millions of people. By serving as a natural interface to services ranging from home automation to media players, conversational assistants help people perform many tasks with ease, such as setting timers, playing music and managing to-do lists. While these systems offer useful capabilities, they are largely passive and unaware of the human behavioral context in which they are used. In this work, we explore how off-the-shelf conversational assistants can be enhanced with acoustic-based human activity recognition by leveraging the short interval after a voice command is given to the device… Read more

    Journal Paper · IMWUT · Human Activity Recognition · Audio Processing · Conversational Assistants
    2020

    Using Convolutional Variational Autoencoders to Predict Post-Trauma Health Outcomes from Actigraphy Data

    Depression and post-traumatic stress disorder (PTSD) are psychiatric conditions commonly associated with experiencing a traumatic event. Estimating mental health status through non-invasive techniques such as activity-based algorithms can help to identify successful early interventions. In this work, we used locomotor activity captured from 1113 individuals who wore a research-grade smartwatch post-trauma. A convolutional variational autoencoder (VAE) architecture was used for unsupervised feature extraction from four weeks of actigraphy data… Read more

    Workshop Paper · NeurIPS · Mental Health · Variational Autoencoders · Actigraphy Data
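At the bottleneck of any VAE, convolutional or otherwise, sit two standard pieces: the reparameterization trick that makes sampling differentiable, and the closed-form KL term that regularizes the latent distribution toward a standard Gaussian. A NumPy sketch of just those two pieces (the convolutional encoder/decoder and training loop are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps: sample the latent code via the
    reparameterization trick so gradients can flow through mu, log_var."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian
    posterior; this is the VAE's latent regularization term."""
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return float(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var)))
```

The full VAE objective adds a reconstruction loss (here, over the actigraphy windows) to this KL term.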

    Eating Episode Detection with Jawbone-Mounted Inertial Sensing

    Recent work in Automated Dietary Monitoring (ADM) has shown promising results in eating detection by tracking jawbone movements with a proximity sensor mounted on a necklace. A significant challenge with this approach, however, is that motion artifacts introduced by natural body movements cause the necklace to move freely and the sensor to become misaligned. In this paper, we propose a different but related approach: we develop a small wireless inertial sensing platform and perform eating detection by mounting the sensor directly on the underside of the jawbone… Read more

    Conference Paper · EMBC · Automated Dietary Monitoring · Inertial Sensing · Smartwatch · Ubiquitous and Mobile Computing

    Usability of a Hands-free Voice Input Interface for Ecological Momentary Assessment

    Ecological Momentary Assessment (EMA) is a data collection method that consists of asking individuals to answer questions pertaining to their behavior, feelings, and experiences in everyday life. While EMA provides benefits compared to retrospective self-reports, the frequency of prompts throughout the day can be burdensome. Leveraging advances in speech recognition and the popularity of conversational assistants, we study the usability of an EMA interface specifically aimed at minimizing the interruption burden caused by EMA… Read more

    Workshop Paper · PerCom · Ecological Momentary Assessment · Speech Recognition · Data Annotation
    2019

    Leveraging Active Learning and Conditional Mutual Information to Minimize Data Annotation in Human Activity Recognition

    A difficulty in human activity recognition (HAR) with wearable sensors is the acquisition of large amounts of annotated data for training models using supervised learning approaches. While collecting raw sensor data has been made easier with advances in mobile sensing and computing, the process of data annotation remains a time-consuming and onerous process. This paper explores active learning as a way to minimize the labor-intensive task of labeling data… Read more

    Journal Paper · IMWUT · Active Learning · Human Activity Recognition · Data Annotation · Conditional Mutual Information
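Active learning reduces annotation effort by letting the model choose which samples a human should label. The most common query strategy is uncertainty sampling: ask for the label of the sample whose predicted class distribution has the highest entropy. The sketch below shows that baseline strategy only; the paper's contribution additionally uses conditional mutual information, which is not reproduced here.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a class-probability vector (natural log)."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def query_most_uncertain(prob_matrix):
    """Uncertainty sampling: return the index of the unlabeled sample
    whose predicted class distribution has maximum entropy."""
    return int(np.argmax([entropy(row) for row in prob_matrix]))
```

In each round, the selected sample is labeled, added to the training set, and the model is retrained before the next query.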

    Towards a Generalizable Method for Detecting Fluid Intake with Wrist-Mounted Sensors and Adaptive Segmentation

    Over the last decade, advances in mobile technologies have enabled the development of intelligent systems that attempt to recognize and model a variety of health-related human behaviors. While automated dietary monitoring based on passive sensors has been an area of increasing research activity for many years, much less attention has been given to tracking fluid intake. In this work, we apply an adaptive segmentation technique on a continuous stream of inertial data captured with a practical, off-the-shelf wrist-mounted device to detect fluid intake gestures passively… Read more

    Conference Paper · IUI · Fluid Intake · Automated Dietary Monitoring · Inertial Sensing · Smartwatch · Ubiquitous and Mobile Computing
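Segmenting a continuous inertial stream means deciding where candidate gesture windows begin and end before classifying them. A very simplified stand-in for an adaptive approach is to threshold windowed signal energy relative to the stream's own statistics, so the threshold adapts to each recording. This is an illustrative baseline, not the adaptive segmentation technique proposed in the paper; the window size and threshold factor are arbitrary.

```python
import numpy as np

def segment_by_energy(signal, win=50, k=1.0):
    """Split a 1-D inertial stream into fixed windows and flag those whose
    mean energy exceeds mean + k*std of all windows (adaptive threshold).
    Returns the indices of the flagged windows."""
    sig = np.asarray(signal, dtype=float)
    energy = np.array([np.mean(sig[i:i + win] ** 2)
                       for i in range(0, len(sig) - win, win)])
    thresh = energy.mean() + k * energy.std()
    return np.flatnonzero(energy > thresh)
```

Flagged windows would then be passed to a gesture classifier to decide whether each active segment is actually a drinking gesture.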
  • © Rebecca Adaimi 2025