
I'm currently working on a DPhil in HCT at the University of Sussex. This section of the website is for an ongoing 'learning diary', where I write my thoughts and notes on various courses and my thesis.

Monday
Nov 19 2007

Assignment 2 - Big, open-ended, Flash...

I'm so behind on writing up these lectures. I will catch up. I'm not sure when, but I will.

I definitely need to catch up on last Thursday's. I was ill and missed the lecture. It looks like Assignment 2 (the big one... 75% of the course riding on this baby) was revealed. I have to come up with a large Flash executable (well, I could use Director, but there's little point I think) of my choice.

This is where the idea of constraints aiding design comes in. I've got so many different ideas going on; how do I select what to do? I've tried drawing up a quick mind-map. I think I want to use mostly content I already have, so photos, blog posts etc. But I want to do some designing, and play with some fonts, colour and layout bits. And then there's the stuff I learnt about at Flash on the Beach: particles, sound integration...

Maybe I need to go back to Chris Orwig's talk about condensing and focussing, and do some small stuff really well. Make this something I can keep for my portfolio: something consistent and good, rather than lots of little bits mishmashed together.

Thursday
Nov 15 2007

Seminar 7 - Accessible Games

Sick sick sick. Dammit.

Will have to follow up on the reading for this once I'm well again... 

Sunday
Nov 11 2007

Seminar 6 - The standards

I'd love to say these were interesting reading, but they really aren't!

There are lots of standards around web accessibility in particular. These include the Web Accessibility Initiative's WCAG 1.0, with its three levels of priority, which I've seen quite a lot of now. Roll on WCAG 2.0; we'll see if it's any better!

PAS 78 is aimed at people who commission websites; I first came across it at a Geek Girl Dinner I went to, where Julie Howell (the author) presented. It's an interesting hole to identify. I'm intrigued by it, especially given that I'm working in a world where the requirements are identified by non-developers, and it would be all too easy for accessibility to fall through the (giant) cracks between analysis, design and build. There are a number of reviews of it around too, frequently by people who had a hand in writing or reviewing it, but there you go! These include Bruce Lawson, out-law.com and the BBC, amongst many others. It's not law, but it could be helpful reading to stay on the right side of the Disability Discrimination Act.

Section 508 is the US ruling, which applies to all Federal websites. That's interesting in itself: it only applies to Federal websites, not sites in general. I guess one of the problems with making legislation concerning websites in any individual country is always going to be working out whether the website in question actually falls under the jurisdiction of that country.

There are problems with just following these standards without thought. It's perfectly possible to create a site that conforms to the very highest standards but is still totally unusable, and the automatic site checkers that are available can't check that side of things. I think there might be the corresponding case too, where you have a really great site with some innovative features that make it really easy for everyone to use, but you missed an alt attribute or three so you're not 'accessible'. I would be really worried that standards might make it too easy to dismiss the difficulties in making a site truly accessible, and prevent further innovation in the area.
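To see why those automatic checkers only get you so far, here's a minimal sketch (my own illustration, not any real tool) of the kind of mechanical check they do: scanning for images with no alt attribute. It can spot that omission instantly, but it has no opinion at all on whether the page is actually usable.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Flags <img> tags that have no alt attribute -- the kind of
    mechanical check an automatic validator can do. It says nothing
    about whether the page makes sense to a real user."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

checker = AltChecker()
checker.feed('<p><img src="chart.png"><img src="logo.png" alt="Site logo"></p>')
print(checker.missing)  # ['chart.png']
```

A check like this would pass a page full of images whose alt text reads "image", which is exactly the conformant-but-unusable case above.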

I could do with doing some reading on the pros and cons before the exam!  

Friday
Nov 02 2007

Seminar 5 - Data representation

Well, that's a coincidence! The reading for this seminar picked up really nicely on some of the things in the other two papers I'd just read!

The Sonic Finder paper from Gaver was fascinating, because it was written in 1989. It was all about how you can use sound as part of an interface to enhance the experience and help the user. For example, the sound of a file being trashed, or a noise playing once you've hit the icon. He was discussing how to use sound to give the user some peripheral information that they wouldn't normally get, and give a 'feel' for the item. For example, use a family of noises for a file, with a lower noise indicating a larger file. Dragging noises could change depending on the surface the object is being dragged over, like dragging over floorboards for empty screen, or over glass for a window.

It was a very literal interpretation, which was interesting. If you think about it, it's not really that obvious that it needs to be that literal. How long would it take to learn that a sound goes with a particular action? I mean, that's how the associations have been formed for the noise of hitting something big vs something small. It also relies quite heavily on the right cultural associations being made. There are enough cultural things associated with presenting information globally (from direction of writing, through to how images are interpreted) without adding sounds to the mix. But correspondingly, how easy would it be to find sounds that have no cultural implications? And how easy would they be to tell apart?
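The size-to-pitch mapping idea can be sketched in a few lines. The formula here is my own illustration, not Gaver's: drop the pitch by an octave for every factor-of-ten increase in file size, so big files hit with a low thud and tiny ones with a high tick.

```python
import math

def file_pitch(size_bytes, base_hz=880.0):
    """Map a file size to a notional pitch: bigger file, lower note.
    Illustrative only -- halve the frequency for each factor-of-ten
    increase in size, starting from base_hz for a 1-byte file."""
    octaves_down = math.log10(max(size_bytes, 1))
    return base_hz / (2 ** octaves_down)

print(round(file_pitch(1)))          # 880 -- tiny file, high note
print(round(file_pitch(1_000)))      # 110
print(round(file_pitch(1_000_000)))  # 14  -- huge file, very low
```

Even a toy mapping like this raises the cultural question above: the big-thing-makes-low-noise association is learnt from the physical world, not given.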

'Earcon' is the term for these non-literal noises, defined by Blattner, Sumikawa and Greenberg in 1989. It looks like Stephen A. Brewster is the earcon man, although judging by the search results it's becoming a general term! "An evaluation of earcons for use in auditory human-computer interfaces" by S. Brewster, P. Wright and A. Edwards (ACM, proceedings of CHI '93) has some good bits on why bother using sound as well as visuals, for sighted and non-sighted individuals.
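A compound earcon in the Blattner et al. sense is built from short abstract motives (little note sequences) that are combined to carry meaning. The motives and names below are entirely my own made-up illustration of the idea, not taken from any of the papers:

```python
# Short abstract motives: (note, duration-in-seconds) pairs.
# These particular motives are invented for illustration.
MOTIVES = {
    "file":   [("C5", 0.1), ("E5", 0.1)],  # rising pair  = file
    "folder": [("G4", 0.2)],               # single note  = folder
    "open":   [("C5", 0.1), ("C6", 0.1)],  # octave up    = open
    "delete": [("C5", 0.1), ("C4", 0.2)],  # octave down  = delete
}

def earcon(*events):
    """Concatenate motives into a compound earcon,
    e.g. 'delete' + 'file' announces deleting a file."""
    notes = []
    for name in events:
        notes.extend(MOTIVES[name])
    return notes

print(earcon("delete", "file"))
# [('C5', 0.1), ('C4', 0.2), ('C5', 0.1), ('E5', 0.1)]
```

The whole point, versus Gaver's auditory icons, is that nothing about these note sequences resembles the action; the user has to learn the grammar, which dodges the cultural-association problem at the cost of a learning curve.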

The other paper in the reading was McGookin and Brewster on graph building, published in 2006. This was a paper about teaching blind people to understand and draw graphs. It was apparently awarded the best HCI paper award for 2006, and it's very thorough. Their system used sonic and haptic feedback in a really complementary way to try to represent what was going on on the screen, but apparently, while it was streets ahead of current graph-teaching techniques, it was still subject to a lot of 'off-by-one' errors. Graham's only real criticism of the paper was the interaction method chosen: a very expensive pen-like device called a Phantom, which moves in 3D (when the screen is 2D). The pen metaphor is not going to be particularly familiar to many blind users, and the expense means not many are going to be able to own one. That's yet another factor to consider in the universal usability drive: devices need to be affordable!

There was some interesting information on using different modes of entry (via a shift key), whether that was a good idea or not, and if not, how you work around it.

It would be quite interesting to try to come up with a Phidget device to work on the problem... But we've already got one problem to work on!  

Tuesday
Oct 30 2007

Reading on auditory systems

Well, to try and look at some of the questions I came up with on the sound design for blind users, I read a couple of papers.

First up was "Electronic Sensory Systems for the Visually Impaired" by B. Ando. Published by IEEE.

This had lots of good follow-up leads and overviews of existing handheld systems, and about hearing, for the choice of which beep pitches to use etc.

Next I looked into the "Design of Auditory UI for Blind Users" by Hilko Donker, Palle Klante and Peter Gorny. Published by the ACM.

This one was less useful for me, but included some good information about sound merging. Apparently it's important for the noises to be pleasant for the user, which suggests nice chords of abstract notes would be better than clashing sounds that might more accurately represent the objects we're looking for. Also, apparently the human ear is better at distinguishing the position of sounds in the horizontal plane than in the vertical.

The other interesting bit was how they tested it. They tried to get users to represent their understanding of the screen layout using a set of pins in a cork board, but the mental maps that the blind users produced were completely incomprehensible to them. When they tried the same thing with blindfolded sighted users, the maps were more what they were expecting. That implies that the way the blind users internalised the spatial layout was very different to the sighted version, and that this isn't necessarily a good way to measure the success of the model. It's pretty interesting in terms of how to represent visual information to people who have no concept of many of the 'standard' visual cues.