
Multimodal Interfaces for Synthetic Training Environments (MIST)

Award Information
Agency: Department of Defense
Branch: Army
Contract: W91CRB-08-C-0009
Agency Tracking Number: A072-194-0149
Amount: $69,779.00
Phase: Phase I
Program: SBIR
Solicitation Topic Code: A07-194
Solicitation Number: 2007.2
Timeline
Solicitation Year: 2007
Award Year: 2007
Award Start Date (Proposal Award Date): 2007-11-06
Award End Date (Contract End Date): 2008-05-06
Small Business Information
Charles River Analytics Inc.
625 Mount Auburn Street
Cambridge, MA 02138
United States
DUNS: 115243701
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
Principal Investigator
Name: Ryan Kilgore
Title: Senior Scientist
Phone: (617) 491-3474
Email: rkilgore@cra.com
Business Contact
Name: Jennifer Barron
Title: Director, Contracts
Phone: (617) 491-3474
Email: jbarron@cra.com
Research Institution
N/A
Abstract

Current synthetic training environments are limited in their effectiveness by their exclusive use of conventional human-computer interaction techniques (e.g., mouse, keyboard, joystick). To train soldiers effectively in small-unit tactics, trainees must be able to interact with semi-automated forces through realistic gestural and verbal communication interfaces. To improve training by incorporating such interfaces, we propose to design and demonstrate Multimodal Interfaces for Synthetic Training Environments (MIST), a reconfigurable hardware and software system. Our approach has four components. First, we will determine the types of communication that must be supported by analyzing soldier communication needs, studying doctrine documents, and interviewing and observing subject matter experts. Second, we will use this analysis to drive the design of multimodal interfaces for synthetic training environments, focusing on voice recognition to support verbal commands and inertial sensors to support gestures. Third, we will design and demonstrate an innovative software and hardware system that supports configuration, selection, and adaptation of multimodal interface methods. Fourth, we will define initial evaluation metrics and develop an evaluation methodology for assessing the utility and efficacy of the proposed multimodal interface methods.
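
As an illustration of the interface concept described above, the following sketch shows how a recognized voice command and an inertial-sensor gesture might be fused into a single command for a semi-automated entity, with configurable phrase and gesture mappings. It is a minimal, hypothetical example only: the event types, command vocabularies, confidence thresholds, and the MultimodalFuser class are assumptions made for illustration and are not taken from the MIST effort itself.

# Purely illustrative sketch of multimodal (voice + gesture) command fusion.
# All class names, vocabularies, and thresholds are hypothetical assumptions,
# not details of the MIST award.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceEvent:
    """Output of a speech recognizer, e.g. 'move', 'halt', 'follow me'."""
    phrase: str
    confidence: float
    timestamp: float  # seconds

@dataclass
class GestureEvent:
    """Output of an inertial-sensor gesture classifier, e.g. 'point', 'wave_forward'."""
    label: str
    confidence: float
    timestamp: float  # seconds

@dataclass
class UnitCommand:
    """Command issued to a semi-automated force (SAF) entity."""
    action: str
    modifier: Optional[str] = None

class MultimodalFuser:
    """Combines voice and gesture events into SAF commands.

    The mapping tables are configurable, so different phrase and gesture
    vocabularies can be swapped in without changing the fusion logic.
    """

    def __init__(self, window_s: float = 1.5, min_conf: float = 0.6):
        self.window_s = window_s  # max time gap allowed between paired events
        self.min_conf = min_conf  # reject low-confidence recognitions
        self.phrase_to_action = {"move": "MOVE", "halt": "HALT", "follow me": "FOLLOW"}
        self.gesture_to_modifier = {"point": "TOWARD_POINTED_DIRECTION", "wave_forward": "ADVANCE"}

    def fuse(self, voice: Optional[VoiceEvent], gesture: Optional[GestureEvent]) -> Optional[UnitCommand]:
        action = None
        modifier = None
        if voice and voice.confidence >= self.min_conf:
            action = self.phrase_to_action.get(voice.phrase)
        if gesture and gesture.confidence >= self.min_conf:
            modifier = self.gesture_to_modifier.get(gesture.label)
        # Only pair the two modalities if they occur close together in time.
        if voice and gesture and abs(voice.timestamp - gesture.timestamp) > self.window_s:
            modifier = None
        if action is None:
            return None
        return UnitCommand(action=action, modifier=modifier)

if __name__ == "__main__":
    fuser = MultimodalFuser()
    cmd = fuser.fuse(
        VoiceEvent(phrase="move", confidence=0.91, timestamp=10.2),
        GestureEvent(label="point", confidence=0.85, timestamp=10.6),
    )
    print(cmd)  # UnitCommand(action='MOVE', modifier='TOWARD_POINTED_DIRECTION')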

Information listed above is at the time of submission.
