Literacy practices in digital contexts tend to depend pretty heavily on the use of keyboards, and keyboard use can cause RSI. The tools I've got include: Dragon speech recognition software; a digipad and software for recording and converting handwritten text; and a dictaphone and software for recording and converting speech to text. I've also been offered two days' training in using these tools, but I haven't managed to arrange this yet. For the moment I'm seeing how I get on by trial and error.
Trial number 1: the digipad
I used the digipad to write a page of handwritten text, including a freehand drawing. Here's what it looked like:
My handwriting is quite good (years of teaching with blackboards and whiteboards), but I didn't really expect the software to be able to read it. However, this is how it rendered it in 'Graphics and text' mode:
Pretty good, I thought. The next step was to convert it into a Word document so that I could edit it. The software has an 'export to Word' function. This is what it produced:
Not so good. The text has been put in frames and laid out arbitrarily so that it overlaps the drawing. However, it's still recognisable, and with a bit of editing... The problem is how much time I'm going to have to spend on the keyboard to make this presentable. Here are my track changes in Word:
Quite a few, and it took me about 10 minutes of typing and mouse-moving. Luckily I also have a new ergonomic keyboard, which reduces some of the strain on my arms. So, the end product:
Not bad. It took a bit longer than I would have liked, and I've lost the nose off my drawing, but overall it was less time at the keyboard than if I had tried to do it in Word from the start. I suspect the software will get better at recognising my handwriting, and that I'll start to find some shortcuts. So this is definitely an option. Now to get going with the speech recognition.