Thursday, 29 April 2010

Assistive technologies for digital scholars

This is the first post in a new thread in this blog, in which I'm going to report on my experiences with a bunch of new tools I've been given to tackle my RSI problems.

Literacy practices in digital contexts tend to depend pretty heavily on the use of keyboards, and heavy keyboard use is a common cause of RSI. The tools I've got include: Dragon speech recognition software; a digipad and software for recording and converting handwritten text; and a dictaphone and software for recording and converting speech to text. I've also been offered two days' training in using these tools, but I haven't managed to arrange this yet. For the moment I'm seeing how I get on by trial and error.
Trial number 1: the digipad

I used the digipad to write a page of handwritten text, including a freehand drawing. Here's what it looked like:
My handwriting is quite good (years of teaching with blackboards and whiteboards), but I didn't really expect the software to be able to read it. However, this is how it rendered it in 'Graphics and text' mode:

Pretty good, I thought. The next step was to convert it into a Word document so that I could edit it. The software has an 'export to Word' function. This is what it produced:

Not so good. The text has been put in frames and arbitrarily laid out so that it overlaps the drawing. However, it's still recognisable, and with a bit of editing...

The problem is: how much time am I going to have to spend on the keyboard to make this presentable? Here are my track changes in Word:

Quite a few - and it took me about 10 minutes of typing and mouse-moving. Luckily I also have a new ergonomic keyboard, which reduces some of the strain on my arms.

So - the end product:

Not bad. It took a bit longer than I would have liked, and I've lost the nose off my drawing, but overall it was less time at the keyboard than if I had tried to do it in Word from the start. I suspect the software will get better at recognising my handwriting, and I'll start to find some shortcuts. So this is definitely an option.

Now to get going with the speech recognition.

1 comment:

  1. This is an interesting topic for a blog. When I started my PhD (in 2001), one of the first things my supervisor asked me was whether I could touch type! I was surprised at this, as voice recognition was being talked about, and I kind of (naively!) assumed that typing wouldn't be as important by the time I finished my PhD, so why bother learning to touch type?

    Naturally I did learn, and I'm very grateful that I did, and I still think it's an important skill (my handwriting is appalling now and I can barely read it, let alone a computer!).

    Just another example of an old, inefficient technology/practice persisting long beyond its best-before date!