Seminar 8 - Agents or manipulation?
Sunday, November 25, 2007 at 12:48
martian77 in HCCS Adv

This was set up as a fight between software agents and direct manipulation, based on a paper from a debate between Ben Shneiderman (for direct manipulation) and Pattie Maes (for agents). I didn't even like the paper. I thought Shneiderman came across as demeaning Pattie Maes rather than actually arguing his corner, and he didn't respond to many (any?) of the points she raised.

So, taking it away from them.

Software agents can be done really badly (like Microsoft's paperclip - the standard illustration of one). There's a paper from BT Laboratories in 1996 (apparently published in the Knowledge Engineering Review, vol 11, no 3, written by Hyacinth S. Nwana) that I found over here that tries to tie together a lot of the agent research at that time. One of the key points for me was that the field couldn't agree on a consistent definition of what an agent is. The general idea is that the software does things in the background to take the effort away from you: for example, something that helps you search by learning what sites interest you and suggesting things in a similar vein.
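To make that concrete, here's a tiny sketch of what "learning what sites interest you" might look like: count the domains the user actually visits and rank candidate links by how familiar their domain is. This is purely my own illustration, not anything from the Nwana paper, and the class and method names are made up.

    # Purely illustrative: learn which domains the user visits, then use that
    # history to rank candidate links. My own sketch, not from the paper.
    from collections import Counter
    from urllib.parse import urlparse

    class InterestModel:
        def __init__(self):
            self.domain_counts = Counter()

        def record_visit(self, url):
            # Called whenever the user actually clicks through to a page.
            self.domain_counts[urlparse(url).netloc] += 1

        def rank(self, candidate_urls):
            # Most-visited domains first; unfamiliar domains sort last.
            return sorted(candidate_urls,
                          key=lambda u: self.domain_counts[urlparse(u).netloc],
                          reverse=True)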

At some level, then, maybe a page like iGoogle or Netvibes or Bloglines, where you collect a set of things for it to 'watch' on your behalf and update immediately, could be considered a really simple agent. It does take the work out of following feeds, because you only have to look at them when there's something new. But a cleverer system could take the feeds you look at and then scan for other feeds that reference them. You can do this yourself by looking at the trackbacks on posts, or by searching on the content, I guess.
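As a rough sketch of that 'really simple agent', something like the following would do the job: poll a list of feeds, remember what's already been seen, and only surface new entries. I'm using the feedparser library here purely by way of example, and the feed URLs are placeholders.

    # A minimal feed-watching 'agent': only show the user entries it hasn't
    # shown before. The URLs are placeholders; persistence is left out.
    import feedparser

    WATCHED_FEEDS = [
        "http://example.com/blog/atom.xml",
        "http://example.org/news/rss",
    ]

    seen_links = set()  # a real agent would save this between runs

    def check_feeds():
        new_entries = []
        for url in WATCHED_FEEDS:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                link = entry.get("link")
                if link and link not in seen_links:
                    seen_links.add(link)
                    new_entries.append((feed.feed.get("title", url),
                                        entry.get("title", "")))
        return new_entries

    for feed_title, entry_title in check_feeds():
        print(feed_title + ": " + entry_title)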

Direct manipulation is just that. Everything that needs doing, you do yourself. So rather than let Clippy set up your letter, you decide where you want the address and do the formatting and so on.

I don't think these are necessarily opposing technologies. I think there's a time and a place for both, and actually they could work really well together. If I'm searching for something specific, it might be handy to have a list of links relating to previous searches alongside the current one. But don't skip the current search (the direct manipulation part) in favour of only returning the suggested links; that becomes self-reinforcing. On the feeds side, sure, highlight some extra stuff I might find interesting, but don't stop following my selected feeds just because I haven't been particularly interested for a day or so.
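To show what I mean by the two working together, here's a toy sketch: the search the user actually asked for always runs, and the agent's suggestions come back separately and capped, so they can never crowd out the real results. The function names and the stand-in search/suggest functions are invented for illustration.

    # Toy illustration: direct manipulation (the user's own query) always runs;
    # the agent's suggestions are extra, capped, and kept separate.
    def combine(query, do_search, suggest_related, max_suggestions=3):
        direct = do_search(query)                        # always honour the actual search
        extras = suggest_related(query)[:max_suggestions]
        # keep suggestions genuinely 'extra' by dropping duplicates of real results
        extras = [link for link in extras if link not in direct]
        return {"results": direct, "suggested": extras}

    # stand-in functions, just to show the shape of the thing
    fake_search = lambda q: ["http://example.com/search?q=" + q]
    fake_suggest = lambda q: ["http://example.org/related-1",
                              "http://example.com/search?q=" + q]
    print(combine("software agents", fake_search, fake_suggest))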

As for which technology is better for usability, that's pretty tough. Any system that learns needs training before it becomes properly useful. It's interesting to me that most blind users don't surf the web for fun, but with a specific purpose. Would they be more likely to surf for fun if the process was fun for them? If an agent learnt some of their preferences and found other relevant pages quickly and easily, would that encourage them to explore further? On the other hand, that kind of learning and altering of what's available could make an interface much less predictable and therefore harder to use, especially if you rely on 'learning' the layout of the screen.

I need to do some more reading on this for the exam, but I'm struggling with my searches. I think that's a side-effect of the word 'agent' now applying to too many things. It looks like Pattie Maes wrote a highly cited piece in 1994, "Agents that reduce work and information overload", available through the ACM. Maybe I'll have a look at some of the papers that cite it, and read it myself.
