Accepted Special Sessions

Currently, there are five accepted special sessions, summarized below:

Adjustable Human-Autonomy Collaboration

Organizer: Daniel Lafond (daniel.lafond@thalesgroup.com)

With the accelerating pace of autonomous system development, effective and reliable human-autonomy teaming (HAT) is becoming a critical issue for the successful integration of these technologies across the many domains set to benefit from combining the best of human and machine capabilities. Yet the promise of increased efficiency and capability also brings risks of poor human-machine understanding, coordination and adjustment to changing contexts. In human-machine collaboration, the machine is viewed as a teammate rather than as a mere tool. This raises important new challenges in how humans, machines and the environment interact with each other, redirect resources, re-allocate tasks, modify workflow parameters, or adjust task sequences, while respecting safety and security constraints.

Adjustable Human-Autonomy Collaboration refers to a set of processes that support the context-aware configuration or reorganization of human and cyber-physical elements into a fluid team structure, seeking to optimize role assignment, workload and levels of autonomy in order to enhance overall system capacity, efficiency, robustness or other key performance indicators in a given safety and security environment. This special session seeks to bring together new perspectives and methods that enable adjustable HAT, with a focus on human-aware and context-aware systems, as well as teamwork processes, measures and experimentation.

Human-Machine Teaming (HMT) on the Battlefield – Situation Awareness and Decision Making between Operators and Robotic & Autonomous Systems (RAS)

Organizer: Trevor Dobbins (trevor.dobbins@rbsl.com)

The aim of this session is to move the emphasis of RAS design from engineering, remote control and simple navigation / obstacle avoidance to cooperative HMT, e.g., the command of multiple RAS by a single operator, with the application of AI to support operational effectiveness and safety.

Overarching themes:

  • Human-Machine Teaming
  • Situation Awareness (SA)
  • Decision making
  • Decision support
  • Human Machine Interface (HMI) / Human Computer Interface (HCI)
  • System design

The contemporary battlefield is characterized by the proliferation of UAVs and the recognition by armies that the future battlefield will see the increasing deployment and utilization of both ground and air Robotic & Autonomous Systems (RAS). The majority of current RAS are remote-controlled or have a limited degree of automation providing navigation and obstacle avoidance.

The operational and human factors communities have recognized that the transition to increasing automation and autonomy in RAS needs to be managed to ensure operational effectiveness, safety and trust are embedded within the RAS system of systems. One example is understanding how single operators can effectively command multiple RAS with the appropriate level of situation awareness to support the risk-based decision making needed to deliver the required combat effect.

As AI decision-support systems become more effective, the system design process will move from human/user-centered design to collaboration-centered design, where the system is designed for the operator and the AI to be mutually supportive.

Interactive and Wearable Computing and Devices

Organizers: Giancarlo Fortino (g.fortino@unical.it), Peter X. Liu (xpliu@sce.carleton.ca), Zhelong Wang (wangzl@dlut.edu.cn), Ye Li (ye.li@siat.ac.cn)

Interactive devices are physical, tangible entities with which both human users and other devices or machines can interact. Special focus is on devices that human users can wear, such as smart watches, health-monitoring electronics, smart glasses, head-mounted stereo displays, exoskeletons and body-worn sensors. An interactive and wearable device usually provides multimodal interfacing, sensory and/or even actuating capabilities in addition to wearability, smartness, data input, communication, and data recording and analysis. There are many potential applications, particularly in healthcare, wellness, consumer electronics, entertainment, Smart-* (home, buildings, factory, port, city) and military domains. With the recent availability of products on the market, such as Google Glass, the Apple Watch, Shimmer wearable sensors and many more, interactive and wearable devices continue to attract the interest of both research communities and industry sectors and are expected to grow rapidly. Research on interactive and wearable computing and devices is highly multi-disciplinary and spans several research frontiers. This Special Session aims at advancing the state of the art and promoting the research, development and innovative applications of interactive and wearable computing and devices. Prospective authors are invited to submit original papers to the Special Session in the areas described below.

  • Intelligent user interfaces
  • Multimodal interaction
  • Emotion recognition and prediction
  • Smart sensors and actuators
  • Body area networks
  • Mobile and wearable computing
  • Affective computing
  • Human-machine systems
  • Communications
  • User safety
  • Security and privacy

Trustworthy Autonomous Systems

Organizers: Svetlana Yanushkevich (syanshk@ucalgary.ca), Konstantinos N. Plataniotis (kostas@ece.utoronto.ca), S. Jamal Seyedmohammadi

Theme 1: Trustworthy Autonomous Systems

Keywords: Autonomous Systems, Trustworthy Human-Autonomy Teaming

Brief Description: Over the past decade, there has been a growing surge of interest in the development and investigation of Autonomous Systems (ASs) characterized by advanced levels of autonomy. This trend has emerged in response to escalating demands for increased complexity and autonomy across various interconnected domains. An AS is an artificial entity endowed with the capability to execute a predetermined set of tasks with exceptional precision and independence. In a broad sense, a fully autonomous system can achieve the following objectives:

  • Acquire or assimilate information about its environment.
  • Operate over an extended time frame without requiring human intervention.
  • Move autonomously, whether fully or partially, within its operational environment.
  • Safeguard itself against potentially harmful situations.
  • Continuously acquire new knowledge to adapt to changing environments.

With the exponential growth in Artificial Intelligence (AI) and the advancements in Deep Neural Networks (DNNs), autonomous systems are garnering increasing attention across a wide array of practical applications with significant engineering implications. These applications encompass complex perception-action cycles, including but not limited to surveillance, cognitive radio, traffic control, and robot-mediated industrial and domestic functions. However, despite these recent advancements, ASs are still in their early stages of development and exhibit certain limitations. For instance, they often lack adaptability when confronted with internal and external non-stationary conditions. In the real world, many ASs frequently encounter non-stationary conditions, often arising from uncertain interactions with their environment and users, cyber-attacks, system failures, or structural alterations. These limitations underscore the pressing need for novel signal processing and machine learning models, which are essential for the continual improvement of advanced autonomous human-machine interaction.

Theme 2: Federated Learning for Trustworthy Human-Autonomy Teaming

Keywords: Federated Learning, Wireless Edge Caching (WEC), Over-the-Air Computation (AirComp)

Brief Description: In centralized Machine Learning (ML) methods, data is typically collected from various sources and centralized in a single location for model training. However, this centralized approach raises concerns about data privacy and security, and requires transferring large volumes of data. Federated Learning (FL) addresses these challenges by allowing models to be trained directly at the data sources without transferring the raw data to a central server. Instead, only the updates of locally trained models, such as gradients or model knowledge, from each data source (often edge devices or local servers) are shared with a central server, or aggregator. The aggregator combines these updates to improve the global model, which is then distributed back to the participating devices for further training iterations. FL leverages distributed computing resources, enabling parallel training on multiple devices concurrently. This leads to efficient use of available computational power and accelerates the training process. Additionally, it can accommodate devices with varying computational capabilities and network connectivity, allowing a wide range of devices to participate in the training process. It is applicable to various domains, including but not limited to healthcare, finance, IoT, human-computer interaction, recommendation systems, and edge computing, where data privacy, computation and communication efficiency, and security are critical considerations. The objective of the proposed special session is to collect novel ideas and experiments on the use of FL in various applications, addressing the problems of centralized ML methods that remain a barrier to deploying these models in real-world applications.
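The train-locally, share-updates, aggregate cycle described above can be illustrated with a minimal federated averaging sketch: each client fits a tiny linear model on its own private data and shares only the resulting weight, which the aggregator combines in a weighted average to form the next global model. The model, data, learning rate and round counts here are illustrative assumptions for exposition, not part of the session description.

```python
import random

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training of a 1-D linear model y = w * x.
    Only the updated weight leaves the client, never the raw data."""
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(updates, sizes):
    """Aggregator: average client weights, weighted by dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(updates, sizes))

# Two clients with private, noise-free datasets of different sizes.
random.seed(0)
TRUE_W = 3.0
clients = []
for n in (40, 120):
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        data.append((x, TRUE_W * x))
    clients.append(data)

w_global = 0.0
for _ in range(30):  # communication rounds
    updates = [local_update(w_global, d) for d in clients]
    w_global = fedavg(updates, [len(d) for d in clients])

print(round(w_global, 2))  # converges toward TRUE_W
```

In a real deployment the same loop runs over a network, with the aggregator combining gradients or model deltas from many heterogeneous edge devices per round; the privacy benefit comes from the fact that only `updates`, never `data`, crosses the device boundary.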

Human-Machine Teaming for Effective Decision-Making

Organizers: Scott Fang (Scott.Fang@forces.gc.ca), Wenbi Wang, Shadi Ghajar-Khosravi, Vladimir Zotov, Geoffrey Ho, Robert Arrabito, Aren Hunter, Hengameh Irandoust, Benoit Ricard, Jack Collier, Simon Monckton, Ming Hou

How can Human-Machine System (HMS) performance be maximized for the Canadian Armed Forces (CAF) and the Department of National Defence (DND) by optimizing human and machine capabilities to achieve effective teamwork? As Artificial Intelligence (AI) and autonomy take on more decision-making tasks and humans play more supervisory, "human-on-the-loop" roles, Human-Machine Teaming (HMT) issues become paramount. A key challenge lies in identifying an appropriate level of autonomy in an HMS while maintaining human Situation Awareness (SA) in joint teamwork, for instance when human control is needed during emergencies. As such, HMT-related research spans a wide range of areas including SA sharing, mutual understanding, trust in joint decisions, actions, behaviors, and potential impacts on system safety and effectiveness, as well as legal, ethical and social implications. More specifically, what needs to be transparent, how to explain AI behaviors and decisions, and when redundant controls are needed alongside the collective intelligence of human and machine are a few examples of the problem space. In summary, key HMT issues to be investigated under this special session should concentrate on identifying, developing, and demonstrating concepts, methodologies, and technologies to enable effective decision-making and maximize team performance for future HMT operations.