Seminar 5 - Data representation
Friday, November 2, 2007 at 16:58
martian77 in HCCS Adv

Well, that's a coincidence! The reading for this seminar picked up really nicely on some of the things in the other two papers I'd just read!

The Sonic Finder paper from Gaver was fascinating, because it was written in 1989. It was all about how you can use sound as part of an interface to enhance the experience and help the user. For example, the sound of a file being trashed, or a noise playing once you've hit the icon. He was discussing how to use sound to give the user some peripheral information that they wouldn't normally get, and to give a 'feel' for the item. For example, use a family of noises for a file, with a lower pitch indicating a larger file. Dragging noises could change depending on the surface the object is being dragged over, like dragging over floorboards for empty screen, or over glass for a window.
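Just to make that kind of mapping concrete, here's a rough sketch of my own (not Gaver's code, and the function names and sound files are made up) of how bigger files could get a lower-pitched selection sound and the drag sound could follow the surface underneath:

```python
import math

# Illustrative only: the kind of parameter mapping the Sonic Finder idea implies.
# Larger files sound lower-pitched, like bigger physical objects, and the
# dragging sound depends on what the icon is being dragged over.

def selection_pitch_hz(size_bytes, hi_hz=880.0, lo_hz=110.0, max_log=9.0):
    """Interpolate pitch on a log scale: tiny files near hi_hz, gigabyte-scale near lo_hz."""
    t = min(math.log10(max(size_bytes, 1)) / max_log, 1.0)
    return hi_hz * (lo_hz / hi_hz) ** t

DRAG_SURFACE_SOUNDS = {
    "desktop": "scrape_floorboards.wav",  # empty screen background
    "window": "scrape_glass.wav",         # dragging across an open window
}

def drag_sound_for(surface):
    return DRAG_SURFACE_SOUNDS.get(surface, "scrape_generic.wav")

if __name__ == "__main__":
    for size in (2_000, 2_000_000, 2_000_000_000):
        print(f"{size:>13} bytes -> {selection_pitch_hz(size):6.1f} Hz")
    print(drag_sound_for("window"))
```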

It was a very literal interpretation, which was interesting. If you think about it, it's not really that obvious that it needs to be that literal. How long would it take to learn that a sound goes with a particular action? I mean, that's how the associations have been formed for the noise of hitting something big vs something small. It also relies quite heavily on the right cultural associations being made. There are enough cultural issues associated with presenting information globally (from direction of writing, through to how images are interpreted) without adding sounds to the mix. But correspondingly, how easy would it be to find sounds that have no cultural implications? And how easy would they be to tell apart?

Earcon is the term for these non-literal noises. And it looks like Stephen A. Brewster is the earcon man, although it does look like it's becoming a general term from the search results! (They were defined by Blattner, Sumikawa and Greenberg in 1989.) "An evaluation of earcons for use in auditory human-computer interfaces" by S. Brewster, P. Wright and A. Edwards (ACM, Proceedings of CHI '93) has some good bits on why to bother using sound as well as visuals for both sighted and non-sighted users.
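The key idea, as I understand it, is that earcons are abstract, structured motives: members of a family share structure (say, rhythm and timbre) and vary one parameter to signal the specific event, so the mapping has to be learned rather than recognised. Here's a toy sketch of that structure (my own, not the actual designs from Blattner et al. or Brewster et al.):

```python
from dataclasses import dataclass

# Toy sketch of the earcon idea: a family of abstract motives that share
# rhythm and timbre, with pitch contour distinguishing the individual event.

@dataclass
class Earcon:
    timbre: str      # instrument/voice shared by the family
    rhythm: tuple    # note durations in beats, shared by the family
    pitches: tuple   # scale degrees that distinguish this member

FILE_FAMILY_RHYTHM = (0.5, 0.5, 1.0)

FILE_EARCONS = {
    "file_opened":  Earcon("marimba", FILE_FAMILY_RHYTHM, (1, 3, 5)),  # rising
    "file_saved":   Earcon("marimba", FILE_FAMILY_RHYTHM, (5, 5, 5)),  # flat
    "file_deleted": Earcon("marimba", FILE_FAMILY_RHYTHM, (5, 3, 1)),  # falling
}

if __name__ == "__main__":
    for event, earcon in FILE_EARCONS.items():
        print(event, earcon)
```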

The other paper in the reading was McGookin and Brewster on a graph-building tool, published in 2006. This was a paper about teaching blind people to understand and draw graphs. It was apparently awarded the best HCI paper award for 2006, and it's very thorough. Their system used sonic and haptic feedback in a really complementary way to try to represent what was going on on the screen, but apparently, while it was streets ahead of current graph-teaching techniques, it was still subject to a lot of 'off-by-one' errors. Graham's only real criticism of the paper was of the interaction method chosen - a very expensive pen-like device called a Phantom, which moves in 3D (when the screen is 2D). The pen metaphor is not going to be particularly familiar to many blind users, and the expense means not many are going to be able to own one. That's yet another factor to consider in this universal usability drive. Devices need to be affordable!

There was some interesting information on using different modes of entry (using a shift key), whether that was a good idea or not, and, if not, how you work around it.

It would be quite interesting to try to come up with a Phidget device to work on the problem... But we've already got one problem to work on!  
