The Fourth Workshop on Continual and Multimodal Learning for Internet of Things

November 06, 2022 • Boston, Massachusetts, USA

Co-Located with SenSys 2022

Previous Editions: CML-IOT'21, CML-IOT'20, CML-IOT'19


The growth of the Internet of Things (IoT) has brought an ever-growing number of connected sensors, continuously streaming large quantities of multimodal data. These data come from a wide range of sensing modalities and exhibit distinct statistical characteristics over time, which are hardly captured by traditional learning methods. Continual and multimodal learning enables the integration, adaptation, and generalization of knowledge learned from experiential and heterogeneous data to new situations, and is therefore an important step toward efficient information inference for IoT systems. CML-IOT welcomes work from diverse communities that introduces algorithmic and systemic approaches to leverage continual learning on multimodal data for applications and real-world computing systems in the Internet of Things.

Call for Papers

The Workshop on Continual and Multimodal Learning for Internet of Things (CML-IOT 2022) aims to explore the intersection of continual machine learning and multimodal modeling. The Internet of Things (IoT) has brought an ever-growing amount of multimodal sensing data (e.g., natural language, speech, image, video, audio, virtual reality, WiFi, GPS, RFID, vibration). The statistical properties of these data vary significantly over time and across sensing modalities; these differences are hardly captured by conventional learning methods. Continual and multimodal learning allows the integration, adaptation, and generalization of knowledge learned from experiential and heterogeneous data to new situations. Therefore, continual and multimodal learning is an important step to improve the estimation, utilization, and security of real-world data from IoT systems.

We welcome works addressing these issues in different applications and domains, such as natural language processing, computer vision, human-centric sensing, smart cities, health, etc. We aim to bring together researchers from different areas to establish a multidisciplinary community and share the latest research. We focus on novel learning methods that can be applied on streaming multimodal data with applications to the Internet of Things. Topics of interest include, but are not limited to:

  • Continual learning
  • Transfer learning
  • Federated learning
  • Few-shot learning
  • Multi-task learning
  • Reinforcement learning
  • Learning without forgetting
  • Individual and/or institutional privacy
  • Methods and architectures for partitioning on-device and off-device learning
  • Managing high volumes of data flow

  • We also welcome continual learning methods that target: (1) data distribution changes caused by the fast-changing, dynamic physical environment, and (2) missing, imbalanced, or noisy data under multimodal data scenarios. Novel applications or interfaces on multimodal data are also related topics.

    We welcome works addressing challenges from a wide range of data and sensing modalities, including but not limited to: WiFi, LIDAR, GPS, RFID, visible light communication, vibration, accelerometer, pressure, temperature, humidity, biochemistry, image, video, audio, speech, natural language, AR/VR.

    Important Dates

  • Submission deadline: September 19, 2022, AoE (extended from September 05, 2022)
  • Notification of acceptance: October 10, 2022, AoE (extended from October 03, 2022)
  • Deadline for camera ready version: October 17, 2022, AoE
  • Workshop: November 06, 2022

    Submission Guidelines

    All submissions must use the LaTeX (preferred) or Word styles found here. LaTeX submissions should use the acmart.cls template (sigconf option) with the default 9-pt font. We invite papers of varying length, from 2 to 6 pages, plus additional pages for references; i.e., reference pages do not count toward the 6-page limit. Accepted papers will be included in the ACM Digital Library and the supplemental proceedings of the conference. Reviews are not double-blind, so author names and affiliations should be listed.

    Invited Keynote Speakers


    Wen Hu

    University of New South Wales

    Privacy-Preserving Machine Learning in Sensor Rich IoT Systems

    Abstract: Sensor-rich IoT systems are becoming ubiquitous in our lives, from smart wristbands with IMUs, to smartphones with depth cameras, to low-cost embedded networked radars. These systems provide compelling alternative ways to detect human context. Yet making robust inferences about an individual's context from multi-modality raw sensor data in the wild remains difficult. Furthermore, human context may include sensitive information, which needs to be protected from malicious attackers. In this talk, I will discuss my group's ongoing research on addressing these challenges, with example applications in fitness, health, and cybersecurity.

    Bio: Wen Hu is a professor at the School of Computer Science and Engineering, the University of New South Wales (UNSW). His current research focuses on novel applications, low-power communications, security, signal processing, and machine learning in Cyber-Physical Systems (CPS) and the Internet of Things (IoT). Hu publishes regularly in top-rated sensor network and mobile computing venues such as ACM/IEEE IPSN, ACM SenSys, ACM MobiCom, and ACM UbiComp. He is an associate editor of ACM TOSN, was the general chair of CPS-IoT Week 2020, and co-chairs the program committees of ACM/IEEE IPSN 2023 and the ACM Web Conference (WWW 2023, Systems and Infrastructure for Web, Mobile Web, and Web of Things track). Hu actively commercialises his research results in smart buildings and IoT; his endeavours include Parking Spotz and WBS Tech. Prior to joining UNSW, he was a principal research scientist and research project leader at CSIRO.


    Wan Du

    University of California, Merced

    Arm Tracking by Multi-Modality Inertial Measurement Unit Sensors

    Abstract: Arm tracking is essential for many mobile applications, such as gesture recognition, fitness training, and smart health. Smartwatches have Inertial Measurement Unit (IMU) sensors, including an accelerometer, a gyroscope, and a magnetometer, which provide a convenient way to track the orientation and location of the wrist. Current IMU-based orientation estimation relies on a fixed multi-modality data fusion scheme that does not adapt to variations in the data quality of IMU sensors. Since existing location estimation relies on the estimated orientation, a small orientation error may cause large inaccuracy in location estimation. Moreover, these location estimation algorithms, e.g., Hidden Markov Models and Particle Filters, cannot provide real-time results due to their high computation overhead. In this talk, I will introduce my group's recent research on deep learning for multi-modality data fusion of IMU sensors, which can tackle the above limitations of current solutions.

    Bio: Dr. Wan Du is an assistant professor in the Department of Computer Science and Engineering at the University of California, Merced. His research interest includes the Internet of Things, Wireless Networking Systems, Cyber Physical Systems, and Deep Reinforcement Learning. His research results have been published in top conferences (e.g., ACM SenSys, ACM/IEEE IPSN, ACM MobiCom, and IEEE INFOCOM) and journals (e.g., IEEE ToN, IEEE TMC, and ACM TOSN). He has received the best paper award in ACM SenSys 2015, the best paper runner-up awards in ACM BuildSys 2022 and IEEE DCOSS 2021, and the best demo award in IEEE SECON 2014. He was one of the Distinguished TPC Members of INFOCOM 2022, 2020, and 2018.


    Workshop Chairs (Feel free to contact us if you have any questions.)
  • Stephen Xia (University of California, Berkeley and Columbia University)
  • Jingxiao Liu (Stanford University)
  • Tong Yu (Adobe Research)
  • Handong Zhao (Adobe Research)
  • Ruiyi Zhang (Adobe Research)

    Steering Committee
  • Nicholas Lane (University of Cambridge and Samsung AI)
  • Lina Yao (University of New South Wales)
  • Jennifer Healey (Adobe Research)
  • Xiaofan (Fred) Jiang (Columbia University)
  • Hae Young Noh (Stanford University)
  • Shijia Pan (University of California Merced)
  • Susu Xu (Stony Brook University)

    Technical Program Committee
  • Winston Chen (MIT)
  • Karthik Dantu (University at Buffalo)
  • Mi Zhang (Ohio State University)
  • Jingping Nie (Columbia University)
  • Jorge Ortiz (Rutgers University)
  • Yang Gao (Northwestern University)
  • Shibo Zhang (HP Labs)
  • Wei Ma (The Hong Kong Polytechnic University)
  • Yujie Wei (Meta)
  • Bingqing Chen (Bosch Center for AI)
  • Shuo Li (Flexport)
  • Chulhong Min (Nokia Bell Labs)
  • Zhanpeng Jin (University at Buffalo)
  • VP Nguyen (University of Texas at Arlington)
  • Anh Nguyen (University of Montana)

    Program (UTC-4)

    Zoom: link
    Welcome! (8:00 - 8:15)
    Keynote (8:15 - 9:15), Speaker: Prof. Wan Du, University of California Merced
    Session 1: Continuous and reinforcement learning for IoT (9:30 - 10:30)

    Towards Data-efficient Continuous Learning for Edge Video Analytics via Smart Caching
    Lei Zhang, Guanyu Gao (Nanjing University of Science and Technology); Huaizheng Zhang (Nanyang Technological University)

    Intelligent Continuous Monitoring to Handle Data Distributional Changes for IoT Systems
    Soma Bandyopadhyay, Anish Datta, Arpan Pal (TCS Research); Srinivas Raghu Raman Gadepally (TATA Consultancy Services)

    H-SwarmLoc: Efficient Scheduling for Localization of Heterogeneous MAV Swarm with Deep Reinforcement Learning
    Haoyang Wang, Xuecheng Chen, Yuhan Cheng (Tsinghua University); Chenye Wu (The Chinese University of Hong Kong, Shenzhen); Fan Dang, Xinlei Chen (Tsinghua University)

    Session 2: Multi-modal and Multi-task learning for IoT (11:00 - 12:00)

    GaitVibe+: Enhancing Structural Vibration-based Footstep Localization Using Temporary Cameras for In-home Gait Analysis
    Yiwen Dong, Jingxiao Liu, Hae Young Noh (Stanford University)

    Out-Clinic Pulmonary Disease Evaluation via Acoustic Sensing and Multi-task Learning on Commodity Smartphones
    Xiangyu Yin, Kai Huang, Erick Forno, Wei Chen, Heng Huang, Wei Gao (University of Pittsburgh)

    Discovering and Understanding Algorithmic Biases in Autonomous Pedestrian Trajectory Predictions
    Andrew Bae, Susu Xu (Stony Brook University)

    Keynote (2:00 - 3:00), Speaker: Prof. Wen Hu, The University of New South Wales
    Session 3: Learning with limited labeled data for IoT (3:30 - 4:30)

    Near-real-time Seismic Human Fatality Information Retrieval from Social Media with Few-shot Large-Language Models
    James Hou, Susu Xu (Stony Brook University)

    Memory-Efficient Domain Incremental Learning for Internet of Things
    Yuqing Zhao, Divya Saxena, Jiannong Cao (The Hong Kong Polytechnic University)

    Riemannian Geometric Instance Filtering for Transfer Learning in Brain-Computer Interfaces
    Qianxin Hui, Xiaolin Liu (Beihang University); Yang Li (Tsinghua University); Susu Xu (Stony Brook University); Shuailei Zhang, Ying Sun, Shuai Wang (Beihang University); Xinlei Chen (Tsinghua University); Dezhi Zheng (Beihang University)

    Business Meeting, Closing, and Awards (4:30)
