GUIDE – A Technical Description

GUIDE has four components:

  1. Hardware,
  2. Voice recognition software,
  3. A protocol or decision tree used to ‘scaffold’ users’ behaviour, and
  4. The GUIDE software program that coordinates the parts.

GUIDE – Hardware

The GUIDE prototype software runs on Windows XP-enabled PCs. It can be run on laptops, PDAs or desktops.

Additionally, hardware is needed for audio presentation to the user and audio input from the user. This can be provided by a high-quality wireless (Bluetooth or DECT) headset (with earphone and microphone). It can also be done without anything worn on the body of the user, using an array microphone and an ordinary speaker built into a countertop box.


GUIDE – Voice recognition software

The GUIDE monitors the user’s progress by verbally prompting, asking questions, and receiving verbal answers. In order to receive verbal input, the computer needs to run voice recognition software. A range of voice recognition packages is now available; we have found Dragon NaturallySpeaking 9.5 to be the best option. Training the voice recognition software takes only a minute or two – enough time for the user to say each of the five commands three or four times.

GUIDE – Protocol

The protocol or decision tree module within the GUIDE software is a carefully crafted sequence of steps and checks which can guide users to successful completion of the given task. The success of the GUIDE is almost completely dependent upon having a rigorous protocol that will ‘scaffold’ the executive function of users, make the most of the self-monitoring skills that they have, and lead users toward the goal without leading them into error.
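The step-and-check structure described above can be sketched as a simple lookup table that a program walks through in response to the user’s answers. The step names and prompt wording below are invented for illustration (loosely based on the tea-making task mentioned later); the real GUIDE protocols are authored graphically.

```python
# Hypothetical fragment of a protocol/decision tree. Each node carries a
# voice prompt and named transitions: "next" for plain prompts, "yes"/"no"
# for checks, with "no" branching into a problem-solving routine.
PROTOCOL = {
    "start":        {"prompt": "Pick up the kettle.", "next": "check_kettle"},
    "check_kettle": {"prompt": "Are you holding the kettle?",
                     "yes": "fill_kettle",      # check passed: move on
                     "no":  "find_kettle"},     # check failed: problem-solve
    "find_kettle":  {"prompt": "The kettle is next to the sink.",
                     "next": "check_kettle"},   # loop back to re-check
    "fill_kettle":  {"prompt": "Fill the kettle with water.", "next": "done"},
    "done":         {"prompt": "Well done, the first step is complete.",
                     "next": None},
}

def step(state, answer=None):
    """Return the next state, given the user's verbal answer (if any)."""
    node = PROTOCOL[state]
    if answer in ("yes", "no") and answer in node:
        return node[answer]
    return node.get("next")
```

Because every failed check routes into a problem-solving loop that returns to the check, the user cannot silently skip a step – mirroring the robustness to errors and deviations described below.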

To date we have developed protocols for making tea, making a smoothie, limb donning, and transfer from a wheelchair to a bed. In each case the protocol has been developed through extensive consultation and research. Consultation with occupational therapists, experts, carers and physiotherapists has been used to identify the most effective sequence of behaviours for the given task. Observation of users, both aided and unaided, engaging in the behaviour has also been used to identify possible mistakes and deviations from the desired sequence. This analysis was used to build checks into the GUIDE and make it robust to errors and deviations. Finally, the protocol has been refined through observation of users actually using the GUIDE. These observations have enabled us to modify the steps, and to add additional steps to guard against mistakes.

GUIDE – Representation and editing of a protocol

Below is a screenshot of the first two checks in the first step of the limb donning protocol. The step is in yellow at the top. The checks are in green, and run in a line from left to right. The problem-solving routines (questions and actions) are in blue, each related to a specific check. Each box has a green button (which plays the voice prompt) and a red button for recording a new voice prompt (the purple button stops the recording). From each box there are outputs (e.g., next, yes, no, done). These outputs are connected to the relevant subsequent prompt. The voice prompts and the relations between them can be rapidly and graphically reconfigured.
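Since the editor lets outputs be rewired freely, one useful sanity check when reconfiguring a protocol is that every output label actually points at an existing box. A minimal sketch of such a check follows; the box names, audio filenames, and the function itself are invented for illustration, not part of the GUIDE editor.

```python
# Hypothetical prompt graph: each box has an audio prompt and named outputs
# (next, yes, no, done, ...) wired to subsequent boxes.
graph = {
    "step1":  {"prompt": "step1.wav",  "outputs": {"next": "check1"}},
    "check1": {"prompt": "check1.wav", "outputs": {"yes": "check2",
                                                   "no": "fix1"}},
    "fix1":   {"prompt": "fix1.wav",   "outputs": {"done": "check1"}},
}

def unresolved_outputs(graph):
    """Return (box, label, target) triples whose target box does not exist."""
    missing = []
    for box, spec in graph.items():
        for label, target in spec.get("outputs", {}).items():
            if target is not None and target not in graph:
                missing.append((box, label, target))
    return missing
```

Here the "yes" output of check1 points at a box, check2, that has not been created yet, so the validator would flag exactly that dangling connection.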

GUIDE – Software program

The GUIDE itself is a specially designed software program that can take input from the voice recognition program and use this to trigger pre-recorded samples, thus providing prompts, asking questions, and responding to user input. At the heart of the GUIDE is a decision tree or protocol especially designed for the given task (such as transfer, donning a limb, or dressing). The GUIDE is written in the Pure Data programming language. Pure Data is an object-oriented programming environment that is optimised for real-time audio processing and user interaction, and is ideally suited for prototyping interactive software.

Technically, the GUIDE software comprises three modules. The core module is the protocol or decision tree. When the program is initiated, the user is at the beginning of the protocol. The user then progresses through the protocol by responding to prompts and questions. The second module is designed to play audio files, or segments of audio files. As the GUIDE progresses through the protocol, it requests the audio module to play relevant samples, which the user hears as prompts and questions. The third module is designed to receive input from the voice recognition software. The voice recognition software outputs via the keyboard, and this third module monitors keyboard input for predefined strings of characters that correspond to the voice commands used to control the GUIDE.
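The string-scanning logic of the third module could be sketched as follows. The command vocabulary here is invented (the document does not name the five commands), and a real implementation would hook into the operating system’s keystroke stream rather than a feed() call; this is only a sketch of the matching idea.

```python
# Hypothetical scanner for the keystrokes emitted by the voice recognition
# software. It keeps a small rolling buffer and reports a command whenever
# one of the predefined strings has just been completed.
class CommandScanner:
    def __init__(self, commands):
        self.commands = list(commands)
        self.buffer = ""
        # The buffer never needs to be longer than the longest command.
        self.maxlen = max(len(c) for c in self.commands)

    def feed(self, char):
        """Feed one keystroke; return a command if one just completed."""
        self.buffer = (self.buffer + char)[-self.maxlen:]
        for cmd in self.commands:
            if self.buffer.endswith(cmd):
                self.buffer = ""   # reset so the command fires only once
                return cmd
        return None

# Invented example vocabulary, for illustration only.
scanner = CommandScanner(["yes", "no", "done", "repeat", "help"])
```

Feeding the scanner the characters of "yes" one at a time would return None for "y" and "e", then "yes" on the final "s", at which point the protocol module can take its yes-branch.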

GUIDE control panel – for loading and running a protocol