I am writing to you because of your interest in the design of systems for disabled people.
My aunt recently underwent an eye operation to correct her age-related macular degeneration. This pioneering operation involved inserting additional lenses into her eye in order to direct images away from the diseased parts of her retina to those unaffected by the condition.
After a successful operation, my aunt, who is in her eighties, had to undergo six weeks of exercises designed to ‘retrain’ her brain to understand the new signals it now receives. The training involves staring for half an hour each day at words that are flashed on a computer screen.
My aunt was supplied with training software on a standard Eee notebook by a company called Veni Vidi. However, she cannot use the software unaided because she is unable to see the on-screen controls.
She can only guess at the position of buttons such as Start, Exit and Finish: when she moves the cursor over the Finish button, it lights up bright pink, which is still difficult for her to make out.
On top of her unfamiliarity with computer technology, she has numbness in her fingers that makes it difficult for her to use the touch pad to control the cursor or to find the small on/off button on the back of the machine. All this means that someone has to be present during her sessions with the system.
I can’t understand why the software isn’t voice-controlled; after all, by definition the people who use it have trouble seeing and are unlikely to be computer literate.