<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Journal Paper on</title><link>https://www.rebeccaadaimi.com/categories/journal-paper/</link><description>Recent content in Journal Paper on</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Wed, 01 Jun 2022 00:00:00 +0000</lastBuildDate><atom:link href="https://www.rebeccaadaimi.com/categories/journal-paper/index.xml" rel="self" type="application/rss+xml"/><item><title>Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches</title><link>https://www.rebeccaadaimi.com/publications/audioimuwatch/</link><pubDate>Wed, 01 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.rebeccaadaimi.com/publications/audioimuwatch/</guid><description>Automatically recognizing a broad spectrum of human activities is key to realizing many compelling applications in health, personal assistance, human-computer interaction and smart environments. However, in real-world settings, approaches to human action perception have been largely constrained to detecting mobility states, e.g., walking, running, standing. In this work, we explore the use of inertial-acoustic sensing provided by off-the-shelf commodity smartwatches for detecting activities of daily living (ADLs). 
We conduct a semi-naturalistic study with a diverse set of 15 participants in their own homes and show that acoustic and inertial sensor data can be combined to recognize 23 activities&amp;hellip;</description></item><item><title>Lifelong Adaptive Machine Learning for Sensor-based Human Activity Recognition Using Prototypical Networks</title><link>https://www.rebeccaadaimi.com/publications/lapnet-har/</link><pubDate>Fri, 11 Mar 2022 00:00:00 +0000</pubDate><guid>https://www.rebeccaadaimi.com/publications/lapnet-har/</guid><description>Continual learning, also known as lifelong learning, is an emerging research topic that has been attracting increasing interest in the field of machine learning. With human activity recognition (HAR) playing a key role in enabling numerous real-world applications, an essential step towards the long-term deployment of such recognition systems is to extend the activity model to dynamically adapt to changes in people&amp;rsquo;s everyday behavior. Research on continual learning in the HAR domain remains under-explored, with researchers largely applying existing methods developed for computer vision to HAR. Moreover, analysis has so far focused on task-incremental or class-incremental learning paradigms where task boundaries are known. This impedes the applicability of such methods to real-world systems, where data arrives in a randomly streaming fashion. To push this field forward, we build on recent advances in the area of continual machine learning and design a lifelong adaptive learning framework using Prototypical Networks&amp;hellip;</description></item><item><title>Ok Google, What Am I Doing? 
Acoustic Activity Recognition Bounded by Conversational Assistant Interactions</title><link>https://www.rebeccaadaimi.com/publications/okgoogle/</link><pubDate>Mon, 01 Mar 2021 00:00:00 +0000</pubDate><guid>https://www.rebeccaadaimi.com/publications/okgoogle/</guid><description>Conversational assistants in the form of stand-alone devices such as Amazon Echo and Google Home have become popular and are embraced by millions of people. By serving as a natural interface to services ranging from home automation to media players, conversational assistants help people perform many tasks with ease, such as setting timers, playing music, and managing to-do lists. While these systems offer useful capabilities, they are largely passive and unaware of the human behavioral context in which they are used. In this work, we explore how off-the-shelf conversational assistants can be enhanced with acoustic-based human activity recognition by leveraging the short interval after a voice command is given to the device&amp;hellip;</description></item><item><title>Leveraging Active Learning and Conditional Mutual Information to Minimize Data Annotation in Human Activity Recognition</title><link>https://www.rebeccaadaimi.com/publications/al/</link><pubDate>Mon, 09 Sep 2019 00:00:00 +0000</pubDate><guid>https://www.rebeccaadaimi.com/publications/al/</guid><description>A difficulty in human activity recognition (HAR) with wearable sensors is the acquisition of large amounts of annotated data for training models using supervised learning approaches. While collecting raw sensor data has been made easier by advances in mobile sensing and computing, data annotation remains time-consuming and onerous. This paper explores active learning as a way to minimize the labor-intensive task of labeling data&amp;hellip;</description></item></channel></rss>