
I'm currently working on a DPhil in HCT at the University of Sussex. This section of the website is for an ongoing 'learning diary', for me to write my thoughts and notes on various courses and my thesis.


A methodology for reviewing academic papers

Given that this week's paper was generally voted to be a poor example of a paper, we tried to come up with a way of reading that might help us identify this more quickly.

  1. Start with the keywords. If they aren't familiar, it probably isn't the right paper.
  2. Move on to the abstract. Does it sound like it might be useful?
  3. Look at the number and quality of citations this paper has. If it has hardly any, chances are it hasn't set the world alight. Likewise if the only people who have referenced it are the authors, chances are it's not really had a large effect on other people. This isn't rigid - you might get something really useful from a little-known idea - but it could be a good indication.
  4. Then read the conclusions - this was quite an interesting idea. If the conclusions seem woolly and don't refer back to the earlier sections, will it be a bit like that throughout? Also, do the conclusions seem to match the abstract? If not, chances are this isn't a great paper.
  5. Look at the section headings. Again, do they appear to match your interpretation of the abstract? In the paper we reviewed, the abstract suggests that a series of methodologies will be discussed. Are there section headings reflecting those different methodologies?
  6. And when you've read it through, does the paper stand on its own as research? Or do you need to look up other references to understand chunks referred to in passing but central to the argument?
  7. What can you take away from this paper? Could you repeat the processes outlined? If not, it's probably bad.
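Half in jest, the steps above could be sketched as a fall-through checklist: run each quick check in order and stop reading as soon as one fails. (This is purely my own illustration - all the names and fields below are made up.)

```python
# A playful sketch of the review checklist: each step is a quick
# yes/no check, and we stop as soon as one fails.
def worth_reading(paper):
    checks = [
        ("keywords familiar", paper["keywords_familiar"]),
        ("abstract sounds useful", paper["abstract_useful"]),
        ("cited by people other than the authors", paper["independent_citations"] > 0),
        ("conclusions match the abstract", paper["conclusions_match_abstract"]),
        ("section headings reflect the abstract", paper["headings_match_abstract"]),
        ("stands on its own as research", paper["self_contained"]),
        ("processes could be repeated", paper["repeatable"]),
    ]
    for name, passed in checks:
        if not passed:
            return False, name  # the first failed check tells you why you stopped
    return True, None

# e.g. a paper that only the authors themselves have ever cited:
verdict, reason = worth_reading({
    "keywords_familiar": True,
    "abstract_useful": True,
    "independent_citations": 0,   # only self-citations
    "conclusions_match_abstract": True,
    "headings_match_abstract": True,
    "self_contained": True,
    "repeatable": True,
})
print(verdict, reason)  # prints: False cited by people other than the authors
```

Of course real reading is fuzzier than booleans, which is rather the point of step 3's caveat about little-known ideas.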

The introduction should explain the problem space. So identify other similar research, explain what's out there, and where the hole is that this paper is trying to fill. The rest of the paper then goes about filling that hole. It's like a story. The same structure that applies to writing an academic paper can also apply to writing dissertations or theses.

It was suggested we look at other papers to judge a good paper, potentially from outside our field. So the ACM (for example) produce a 'most downloaded paper' chart. Clearly the top papers must have something going for them and would probably be worth a read. Likewise conferences (such as CHI or the British HCI conference) will award a 'Best Paper' for (yes, I know) the best paper at the conference. These would also be worth looking at. And the Journal of Usability Studies (JUS) have a set of submission guidelines that are also worth reading.

Interestingly, white papers are not peer-reviewed, so they tend to be opinion pieces rather than full academic papers, and they are often commercial. They can still be useful, though. Need to have a look at whitepapers.com


Seminar 3 - the User Sensitive Inclusive Design paper discussed.

We finished the seminar earlier, and I wanted to get down my thoughts and stuff quickly while they are fresh. Controversial, I know.

We started out by discussing generally what we thought of the paper. Everyone pretty much agreed that it was rubbish. When we looked at what we were all expecting from the abstract, the paper didn't deliver. The structure was weak, the opinions were not backed up, and no decent conclusions were reached. This kind of spurred a discussion on how to read a paper and what might be in a good paper - I'm going to write that up in a separate post though so I can find it easily later. So yeah. Mr Newell and Mr Gregor (or Dr or Prof, whatever) - we didn't like it. Please do better next time.

Having said that, Yves and I still had to carry on and try to drive some kind of session based on it. We started with the following sentence: "He [Newell] introduced the concept of considering a 'user' as being defined by a point in the multi-dimensional space which specified their functionality, and the relationship of that functionality to the environment in which the user operated." Like I said in my notes on the paper, we'd identified that some functionality was important to both a blind person and a sighted person using a mobile, so I thought it would be quite interesting to see if we could do this and identify that sort of cross-user functionality.

I'm not convinced it went well, exactly. It certainly sparked debate. Straight off we hit the problem: what does that mean? Should we draw a set of axes, then try to position each user on the chart? What should the axes represent? What do we mean by a user?

After stabbing at that for a bit, we tried to take it back a step and look at what we would get out of such a chart. Other than understanding that we were designing for a larger section of the populace than it may first seem, and perhaps allowing for cross-fertilisation of ideas (e.g. the T9 keyboard example...) even this wasn't entirely obvious.

We tried using impairment rather than disability, so we could 'group' levels of similar difficulty regardless of the cause. We considered using time to perform a task as a measure of impairment, but that was controversial. And how do you then go about breaking down your clusters of similarly impaired people so that you can understand the causes and ameliorate them? For example, if a sighted beginner and a blind expert take the same length of time to complete a task, one solution is probably not going to improve the situation for both users.

So we never quite got to the other 3 topics we'd picked out to consider. We did spend quite a long time thinking about users though, and if we could spend an hour discussing this in circles without coming up with anything useful, perhaps the concept deserved slightly more than a passing sentence in the paper.

With hindsight (always wonderful) perhaps I needed to spend less time on my search for Newell's 1993 paper, and more time thinking about other related things to look into. Looking at other systems of classifying users could have been interesting, and finding some different examples would have given us something to segue to when we discovered this was actually really difficult. Possibly, when neither of us really had a good enough grasp of the concept to even put up a straw man, Yves and I should have realised we ought to abort and move on to other things. I'm not sure.

I guess it all depends on your measure of success. We definitely sparked debate, but did we generate any useful 'takeaways'? I'm just not sure.


Links and references from User Sensitive paper

The Trace Center - American centre, based at the Engineering department of the University of Wisconsin-Madison: http://trace.wisc.edu/

Tiresias - Information resource for people working in the field of visual disabilities: http://www.tiresias.org/guidelines/index.htm .

Center for Universal Design - Part of North Carolina State University:  http://www.design.ncsu.edu/cud/

The Web accessibility guidelines: http://www.w3.org/WAI/



User Sensitive Inclusive Design

I volunteered to lead the first discussion seminar bit. The paper we're looking at is "'User Sensitive Inclusive Design' - in search of a new paradigm" by Alan F Newell and Peter Gregor, published in the "Proceedings of the 2000 Conference on Universal Usability".

Following on from Graham's advice on how to read a paper, I thought about what I was expecting after I'd read the abstract. I was expecting a discussion of several research models (I had to look up paradigm. Apparently it's something serving as an example or model of how things should be done, according to my little dictionary anyway...) followed by a discussion of a better system - possibly in the form of a 'straw man' to cause and inform debate. I was also thinking there may be some pointers to where further research/thought may be needed to improve the final system.

The structure of the paper is that of a discussion paper, rather than an experimental paper as such. True to my expectation the paper starts discussing a couple of different systems. I found the discussion of the INCLUDE project and the Centre for Universal Design at North Carolina State (there was a link in the paper, but it's no longer valid - 7 years is a long time in internet years) quite damning and dismissive. I wasn't entirely sure what the point was, other than to demonstrate the need for an improvement. I think I need to do some research into what these two are or were like, to flesh out the view presented in the paper.

One thing I think it would be interesting to take forward into the seminar on Thursday is to examine Newell's idea of a user being represented on a chart of the functionality of the user and the environment in which the user was operating. We identified in the last session a common need between a blind person viewing a webpage with a screen reader, and a sighted person viewing one using a mobile phone. Both need to skip straight past all the normal headings and navigation straight to the content. Would this similar need be suggested on a chart of the type Newell is talking about? (Also the T9 keyboard was developed for disabled users and transferred to mobiles - because of their 'restricted' interface new methods of interacting have been identified, and the number of cross-overs between universal design and mobile design is interesting in itself.) Maybe we could take a single example and try to plot some different users and environments... Unfortunately I'm having trouble tracking down the paper referenced, because it was produced in 1993 and the IEEE subscription of the university only goes back to 1998.

The possibility of including disabled people as part of a User Centred Design (UCD) process is then discussed. This seems on first thought to be a sensible step. I mean, the whole point of UCD is to get a good understanding of your users, so just include the minority groups within the research. Some of the problems with doing so are then listed. To me they look a lot like the problems with working with children. That ties in with the earlier idea that ordinary users in extreme or different situations can have the same problems/needs as disabled users. It also suggests that this new design paradigm could have potential uses in a range of scenarios, and could lend greater commercial clout to the research. Again, this looks like an area that is pretty ripe for discussion. What other problems can we come up with, and can we suggest any solutions or workarounds?

One sentence I found particularly interesting was that "Newell deliberately ignored the views of users and clinicians". Well, what's the point of doing user-centred research if you're then going to play the part of genius designer anyway? Again, I can't find the papers referenced for that bit. I'm hoping that he focussed instead on the underlying needs demonstrated, rather than on what they said they needed, and didn't just completely ignore them.

There is a clear dislike of guidelines demonstrated throughout the paper. Part of the final description of what is needed from a new paradigm is a method for disseminating the information and methods that does not allow design by "mechanistically applying a set of 'design for all' guidelines". Need to consider what other methods could achieve this.

All in all this paper wasn't particularly satisfactory. The discussion of the research models was brief and didn't really describe the models at all. The reader needs to go and look these up for themselves, but the references given are no longer easy to find. The only section that was 'in-depth' was the discussion of the way research is currently conducted at Dundee. Even the promised new paradigm is actually mostly a justification of a change of name and some suggestions of what consideration needs to be given to formulating the approach. Perhaps I misinterpreted the abstract!  


Cognitive dissonance and other quick notes

Just had a bit of a chat with Lizzie about some of my ideas on my dissertation, and she thinks rather than group theory I need to look at sociology and something called 'cognitive dissonance'. So, I will! Later...