Active Inference Framework for Designing Efficient Brain-Machine Interfaces

UIUC

Abstract:

Brain-machine interfaces allow human users to control external devices with their thoughts. For instance, inputs from the human user might be obtained non-invasively at low bit-rates through an electroencephalograph (EEG), and these inputs might then be used to control an external device such as a robotic wheelchair. The goal in the development of such interfaces is to improve the lives of individuals with impaired sensory-motor function, such as patients with severe spinal cord injuries.

In this talk, I will present a framework for the design of interfaces that can obtain only noisy, discrete inputs at high latency from a human user (e.g., with EEG) to control a robotic system that can provide visual feedback. In this framework, the user communicates their intent by providing inputs in response to a sequence of queries posed by the robot. The approach is to construct an active inference policy that selects the query with the maximum value (e.g., information content), given a Bayesian estimate of the user's intent conditioned on responses to previous queries and an estimate of how quickly and accurately the robot can obtain the user's response. Under certain conditions, this policy reduces to the optimal feedback policy for transmitting a message between two computational agents over the underlying communication channel. Remarkably, for an interesting class of user intents (e.g., desired paths for robotic navigation), the queries synthesized by the optimal feedback policy are easy for humans to understand and use to convey their intent to the robot.
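To make the query-selection step concrete, below is a minimal sketch of one greedy step, assuming a finite set of candidate intents with a prior belief, binary yes/no queries, and a binary symmetric channel with error rate eps standing in for the noisy EEG input; the function names and the one-step information-gain objective are illustrative, not the talk's actual implementation.

    import numpy as np

    # Minimal sketch of greedy, information-maximizing query selection.
    # Assumptions (illustrative, not from the talk): a finite set of
    # candidate intents with a prior belief, binary queries encoded as
    # 0/1 vectors over the intents, and a binary symmetric channel with
    # error rate eps standing in for the noisy EEG input.

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def update_belief(belief, query, response, eps):
        # Bayes rule: query[i] = 1 if intent i answers "yes" to the query.
        answer = query if response == 1 else 1 - query
        likelihood = answer * (1 - eps) + (1 - answer) * eps
        post = belief * likelihood
        return post / post.sum()

    def expected_info_gain(belief, query, eps):
        # Mutual information between the noisy response and the intent.
        p_yes = np.sum(belief * (query * (1 - eps) + (1 - query) * eps))
        h_post = (p_yes * entropy(update_belief(belief, query, 1, eps))
                  + (1 - p_yes) * entropy(update_belief(belief, query, 0, eps)))
        return entropy(belief) - h_post

    def best_query(belief, queries, eps):
        # Greedily pick the query whose noisy answer is most informative.
        gains = [expected_info_gain(belief, q, eps) for q in queries]
        return int(np.argmax(gains))

An interface built on this sketch would loop: pose the query returned by best_query, observe the user's noisy response, update the belief with update_belief, and act once the belief concentrates on a single intent.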

Using this framework, we developed three EEG-based brain-machine interfaces that allowed human users to efficiently control robotic systems with input commands that were discrete, noisy, and subject to high latency. The first two interfaces enabled users to steer a simulated aircraft flying at fixed speed and altitude, and a virtual mobile robot moving indoors, along desired paths using a sequence of binary inputs obtained from EEG. The desired paths were modeled as strings of motion primitives from a finite alphabet for navigating the aircraft, and as local geodesics minimizing a cost function recovered from human-demonstrated data for navigating the mobile robot indoors. The third interface enabled users to specify text with discrete inputs obtained from EEG twice as fast as with prior state-of-the-art text entry methods under the same input mechanism.
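As a toy illustration of the first path model, the sketch below encodes a desired path as a string over a hypothetical three-symbol primitive alphabet (turn left, go straight, turn right, each over a fixed step length); the symbols and geometry are assumptions for illustration, not the primitives used in the actual interfaces.

    import math
    from dataclasses import dataclass

    # Toy illustration of a desired path as a string over a finite
    # alphabet of motion primitives. The alphabet and step geometry
    # below are assumed for illustration only.

    PRIMITIVES = {"L": math.radians(15), "S": 0.0, "R": math.radians(-15)}

    @dataclass
    class Pose:
        x: float
        y: float
        heading: float  # radians

    def apply_primitive(pose, symbol, step=1.0):
        # Advance the pose by one primitive: rotate, then move one step.
        heading = pose.heading + PRIMITIVES[symbol]
        return Pose(pose.x + step * math.cos(heading),
                    pose.y + step * math.sin(heading),
                    heading)

    def rollout(pose, path):
        # A desired path such as "SSLRS" is just a string of primitives.
        for symbol in path:
            pose = apply_primitive(pose, symbol)
        return pose

    print(rollout(Pose(0.0, 0.0, 0.0), "SSLRS"))

Because a path in this model is just a string over a finite alphabet, queries about the user's intended path reduce to queries about its symbols, which fits the query-based policy sketched above.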

Bio:
Abdullah Akce is a Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign. He has worked with Prof. Timothy Bretl (Aerospace Engineering) to improve the performance of brain-machine interfaces for the control of robotic systems. He holds a B.S. degree in Computer Engineering from Bogazici University, Turkey. He expects to graduate with his Ph.D. in May 2013.