
Rebecca Adaimi, PhD

"Towards lifelong modeling of human behavior"

  • © Rebecca Adaimi 2025
    2022

    Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches

    Automatically recognizing a broad spectrum of human activities is key to realizing many compelling applications in health, personal assistance, human-computer interaction and smart environments. However, in real-world settings, approaches to human action perception have been largely constrained to detecting mobility states, e.g., walking, running, standing. In this work, we explore the use of inertial-acoustic sensing provided by off-the-shelf commodity smartwatches for detecting activities of daily living (ADLs). We conduct a semi-naturalistic study with a diverse set of 15 participants in their own homes and show that acoustic and inertial sensor data can be combined to recognize 23 activities…

    Journal Paper · IMWUT · Multimodal Classification · Sound Sensing · Inertial Sensing · Human Activity Recognition · Audio Processing · Smartwatch · Dataset · Activities of Daily Living · Ubiquitous and Mobile Computing
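One common way to combine acoustic and inertial predictions is late fusion: each modality's classifier outputs per-class probabilities, and the two distributions are merged before taking the final decision. The sketch below is illustrative only (the function name, the weighted-average scheme, and the `w_audio` parameter are assumptions, not the paper's actual fusion method):

```python
import numpy as np

def late_fusion(audio_probs, imu_probs, w_audio=0.5):
    """Weighted average of per-class probabilities from two modalities.

    audio_probs, imu_probs: arrays of shape (n_classes,), each summing to 1.
    Returns the index of the most probable class after fusion.
    Illustrative sketch; the paper's fusion strategy may differ.
    """
    fused = w_audio * np.asarray(audio_probs) + (1 - w_audio) * np.asarray(imu_probs)
    return int(np.argmax(fused))

# Example: audio favors class 0, motion favors class 1; fusion resolves it.
pred = late_fusion([0.6, 0.3, 0.1], [0.2, 0.7, 0.1])  # → 1
```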
    2021

    Ok Google, What Am I Doing? Acoustic Activity Recognition Bounded by Conversational Assistant Interactions

    Conversational assistants in the form of stand-alone devices such as Amazon Echo and Google Home have become popular and embraced by millions of people. By serving as a natural interface to services ranging from home automation to media players, conversational assistants help people perform many tasks with ease, such as setting timers, playing music and managing to-do lists. While these systems offer useful capabilities, they are largely passive and unaware of the human behavioral context in which they are used. In this work, we explore how off-the-shelf conversational assistants can be enhanced with acoustic-based human activity recognition by leveraging the short interval after a voice command is given to the device…

    Journal Paper · IMWUT · Human Activity Recognition · Audio Processing · Conversational Assistants
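The core idea of bounding recognition by the assistant interaction can be pictured as slicing out the short stretch of audio that follows a voice command and classifying only that segment. A minimal sketch, where the sample rate, window length, and function name are all assumptions for illustration:

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz; assumed capture rate for illustration

def post_command_window(audio, command_end_sample, window_s=5.0):
    """Slice the short interval of audio following a voice command.

    audio: 1-D array of samples from the device microphone.
    command_end_sample: sample index where the spoken command ends.
    window_s: length of the post-command interval to classify.
    Illustrative sketch; segment length and alignment are assumptions.
    """
    start = command_end_sample
    stop = start + int(window_s * SAMPLE_RATE)
    return audio[start:stop]

# Example: a 10-second recording, command ends at the 1-second mark.
audio = np.arange(SAMPLE_RATE * 10)
window = post_command_window(audio, SAMPLE_RATE)  # 5 s of post-command audio
```

The returned window would then be fed to an acoustic activity classifier; everything outside it is ignored, which is what keeps the sensing bounded by the interaction.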
    2019

    Leveraging Active Learning and Conditional Mutual Information to Minimize Data Annotation in Human Activity Recognition

    A difficulty in human activity recognition (HAR) with wearable sensors is the acquisition of large amounts of annotated data for training models using supervised learning approaches. While collecting raw sensor data has been made easier with advances in mobile sensing and computing, data annotation remains a time-consuming and onerous process. This paper explores active learning as a way to minimize the labor-intensive task of labeling data…

    Journal Paper · IMWUT · Active Learning · Human Activity Recognition · Data Annotation · Conditional Mutual Information
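Active learning reduces annotation effort by asking a human to label only the samples the model is least certain about. The sketch below shows one standard query criterion, least-confident sampling; it is a generic illustration and does not implement the paper's conditional-mutual-information component:

```python
import numpy as np

def least_confident(probs):
    """Select the unlabeled sample the model is least sure about.

    probs: (n_samples, n_classes) predicted class probabilities for the
    unlabeled pool. Returns the row index whose top-class probability is
    lowest, i.e. the next sample to send to a human annotator.
    Generic active-learning sketch, not the paper's full method.
    """
    confidence = probs.max(axis=1)      # top-class probability per sample
    return int(confidence.argmin())     # least confident sample wins

# Example pool: the middle sample is nearly a coin flip, so it is queried.
pool = np.array([[0.9, 0.1],
                 [0.55, 0.45],
                 [0.8, 0.2]])
query_idx = least_confident(pool)  # → 1
```

In a full loop, the queried sample is labeled, added to the training set, the model is retrained, and the pool scores are recomputed.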