
Rebecca Adaimi, PhD

"Towards lifelong modeling of human behavior"

    2022

    AudioIMU: Enhancing Inertial Sensing-Based Activity Recognition with Acoustic Models

    Modern commercial wearable devices are widely equipped with inertial measurement units (IMUs) and microphones. The motion and audio signals captured by these sensors can be used to recognize a variety of user physical activities. Compared to motion data, audio data contains rich contextual information about human activities, but continuous audio sensing also poses extra data sampling burdens and privacy issues. Given such challenges, this paper studies a novel approach to augment IMU models for human activity recognition (HAR) with the superior acoustic knowledge of activities. Specifically, we propose a teacher-student framework to derive an IMU-based HAR model… Read more

    Conference Paper · ISWC · Knowledge Distillation · Sound Sensing · Inertial Sensing · Human Activity Recognition · Audio Processing
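The abstract only sketches the teacher-student setup, so the following is a minimal, generic illustration of knowledge distillation in that spirit, not the paper's actual model or loss: a "teacher" (here standing in for the acoustic model) produces softened class probabilities that a "student" (standing in for the IMU model) is trained to match via a temperature-scaled KL divergence. All logits below are toy values.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return (T ** 2) * kl.mean()

# Toy logits: the teacher is confident about class 0; a student that
# agrees incurs a lower distillation loss than one that disagrees.
teacher = np.array([[5.0, 1.0, 0.5]])
good_student = np.array([[4.0, 1.2, 0.6]])
bad_student = np.array([[0.5, 4.0, 1.0]])
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

In practice this loss term would be combined with an ordinary cross-entropy term on ground-truth labels when training the student.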

    Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches

    Automatically recognizing a broad spectrum of human activities is key to realizing many compelling applications in health, personal assistance, human-computer interaction and smart environments. However, in real-world settings, approaches to human action perception have been largely constrained to detecting mobility states, e.g., walking, running, standing. In this work, we explore the use of inertial-acoustic sensing provided by off-the-shelf commodity smartwatches for detecting activities of daily living (ADLs). We conduct a semi-naturalistic study with a diverse set of 15 participants in their own homes and show that acoustic and inertial sensor data can be combined to recognize 23 activities… Read more

    Journal Paper · IMWUT · Multimodal Classification · Sound Sensing · Inertial Sensing · Human Activity Recognition · Audio Processing · Smartwatch · Dataset · Activities of Daily Living · Ubiquitous and Mobile Computing

    Lifelong Adaptive Machine Learning for Sensor-based Human Activity Recognition Using Prototypical Networks

    Continual learning, also known as lifelong learning, is an emerging research topic that has been attracting increasing interest in the field of machine learning. With human activity recognition (HAR) playing a key role in enabling numerous real-world applications, an essential step towards the long-term deployment of such recognition systems is to extend the activity model to dynamically adapt to changes in people’s everyday behavior. Continual learning in the HAR domain remains under-explored, with work so far largely adapting existing methods developed for computer vision. Moreover, analysis has so far focused on task-incremental or class-incremental learning paradigms where task boundaries are known. This impedes the applicability of such methods for real-world systems since data is presented in a randomly streaming fashion. To push this field forward, we build on recent advances in the area of continual machine learning and design a lifelong adaptive learning framework using Prototypical Networks… Read more

    Journal Paper · Sensors Journal · Continual Learning · Lifelong Learning · Incremental Learning · Online Learning · Prototypical Networks · Catastrophic Forgetting · Human Activity Recognition
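As a rough sketch of why Prototypical Networks suit lifelong adaptation (this is the generic technique, not the paper's framework): each class is represented by the mean of its embeddings, and queries are classified by the nearest prototype, so adding a newly observed activity only requires computing one more mean. The embeddings and activity labels below are invented toy data.

```python
import numpy as np

def prototypes(embeddings, labels):
    """One prototype per class: the mean embedding of that class."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, classes, protos):
    """Nearest-prototype (Euclidean distance) classification."""
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-D embeddings for two hypothetical activities (0 and 1).
emb = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
lab = np.array([0, 0, 1, 1])
classes, protos = prototypes(emb, lab)
pred = classify(np.array([[0.1, 0.0], [4.8, 5.2]]), classes, protos)
```

Because prototypes are just per-class means, they can also be updated incrementally from streaming data without retraining the embedding network on every step.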
    2021

    Ok Google, What Am I Doing? Acoustic Activity Recognition Bounded by Conversational Assistant Interactions

    Conversational assistants in the form of stand-alone devices such as Amazon Echo and Google Home have become popular and embraced by millions of people. By serving as a natural interface to services ranging from home automation to media players, conversational assistants help people perform many tasks with ease, such as setting timers, playing music and managing to-do lists. While these systems offer useful capabilities, they are largely passive and unaware of the human behavioral context in which they are used. In this work, we explore how off-the-shelf conversational assistants can be enhanced with acoustic-based human activity recognition by leveraging the short interval after a voice command is given to the device… Read more

    Journal Paper · IMWUT · Human Activity Recognition · Audio Processing · Conversational Assistants
    2019

    Leveraging Active Learning and Conditional Mutual Information to Minimize Data Annotation in Human Activity Recognition

    A difficulty in human activity recognition (HAR) with wearable sensors is the acquisition of large amounts of annotated data for training models using supervised learning approaches. While collecting raw sensor data has been made easier with advances in mobile sensing and computing, data annotation remains a time-consuming and onerous process. This paper explores active learning as a way to minimize the labor-intensive task of labeling data… Read more

    Journal Paper · IMWUT · Active Learning · Human Activity Recognition · Data Annotation · Conditional Mutual Information
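To illustrate the general active-learning idea the abstract describes, here is a minimal uncertainty-sampling sketch: the model asks a human to label only the samples it is least sure about. Note this shows plain entropy-based sampling, not the conditional-mutual-information criterion the paper actually studies; the probabilities are toy values.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of predicted class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def select_queries(probs, k):
    """Indices of the k most uncertain (highest-entropy) unlabeled samples."""
    return np.argsort(-entropy(probs))[:k]

# A near-uniform prediction is queried for a label before a confident one.
probs = np.array([[0.34, 0.33, 0.33],   # uncertain: model should ask
                  [0.98, 0.01, 0.01]])  # confident: no annotation needed
assert select_queries(probs, 1)[0] == 0
```

The annotation budget is then spent only on the selected samples, which is exactly the labeling-effort reduction the paper targets (via a different, information-theoretic selection criterion).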
  • © Rebecca Adaimi 2025