Computer Vision and Pattern Analysis Laboratory
Paper Library
Incorporation of a Language Model into a Brain Computer Interface Based Speller
Authors: Cagdas Ulas
Published in: Sabanci University Research Database
Publication year: 2013
Abstract: Brain computer interface (BCI) research deals with the problem of establishing direct communication pathways between the brain and external devices. The primary motivation is to enable patients with limited or no muscular control to use external devices by automatically interpreting their intent based on brain electrical activity, measured by, e.g., electroencephalography (EEG). The P300 speller is a widely practised BCI setup in which subjects type letters based on P300 signals generated by their brains in response to visual stimuli. Because of the low signal-to-noise ratio (SNR) and variability of EEG signals, existing typing systems use many repetitions of the visual stimuli to increase accuracy at the cost of speed. The main motivation for the work in this thesis comes from the observation that the prior information provided by both neighbouring and current letters within words in a particular language can assist letter estimation, with the aim of developing a system that achieves higher accuracy and speed simultaneously. Based on this observation, in this thesis, we present an approach for incorporating such information into a BCI-based speller through Hidden Markov Models (HMM) trained by a language model. We then describe filtering and smoothing algorithms in conjunction with n-gram language models for inference over such a model. We have designed data collection experiments for offline and online decision-making which demonstrate that incorporation of the language model in this manner results in significant improvements in letter estimation and typing speed.
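The core idea in the abstract can be sketched as standard HMM forward filtering: an n-gram language model supplies the transition probabilities between letters, and the EEG classifier's per-letter scores act as emission likelihoods. The toy alphabet, bigram table, and classifier scores below are invented for illustration and are not from the thesis.

```python
# Hypothetical sketch (not the thesis code): HMM forward filtering for a
# P300 speller. A bigram language model gives P(next letter | current
# letter); simulated P300-classifier scores serve as emission likelihoods.
import numpy as np

letters = ["A", "B", "C"]          # toy 3-letter alphabet
n = len(letters)

# Toy bigram language model: row i is P(next letter | current letter i).
bigram = np.array([[0.1, 0.6, 0.3],
                   [0.5, 0.2, 0.3],
                   [0.3, 0.3, 0.4]])

def filter_step(belief, likelihood):
    """One HMM forward-filtering step: predict the next letter with the
    language model, then update with the EEG emission likelihood."""
    predicted = bigram.T @ belief   # P(x_t | evidence up to t-1)
    updated = predicted * likelihood
    return updated / updated.sum() # normalized posterior over letters

# Simulated classifier likelihoods for two consecutive typed letters.
evidence = [np.array([0.7, 0.2, 0.1]),
            np.array([0.2, 0.7, 0.1])]

# First letter: uniform prior combined with the first likelihood.
belief = np.full(n, 1.0 / n) * evidence[0]
belief /= belief.sum()

# Subsequent letters: language-model prediction plus classifier update.
for lik in evidence[1:]:
    belief = filter_step(belief, lik)

print(letters[int(np.argmax(belief))])   # most probable current letter
```

Smoothing, also mentioned in the abstract, would additionally run a backward pass so that later evidence can revise earlier letter decisions; the filtering step above only uses past and current evidence.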