Thursday, January 25, 2007

Experiences shared

I asked my friend, who did usability testing at a company last summer, about her experience, and she shared how they ran their tests. Because she had promised never to share data, reports, or ideas with anyone, I only learned about the overall process.

Steps:
1) test plan
2) protocol (script for moderator)
3) set up software and hardware for recording
4) go through the test with user
5) observe and take notes
6) edit the videos
7) write the report
8) present to designers and other stakeholders
9) put document and video on intranet

How to sample?
Rule of thumb: recruit from the target audience of the system/product/website.
The target audience may differ in age, gender, experience, etc. To avoid bias, don't recruit your subjects from only one category.

Number of participants?

Five participants are usually enough for a general usability test; about 15 for a card sorting test.
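The five-user rule traces back to a problem-discovery model (Nielsen & Landauer, 1993): if each user independently encounters a given problem with average probability L (commonly cited as about 0.31), then n users uncover roughly 1 − (1 − L)^n of the problems. A minimal Python sketch, assuming that commonly cited L value rather than any data from this project:

```python
def problems_found(n, discovery_rate=0.31):
    """Expected proportion of usability problems uncovered by n test users,
    per the Nielsen/Landauer problem-discovery model: 1 - (1 - L)^n."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 5, 15):
    print(f"{n} users -> {problems_found(n):.0%} of problems found")
```

With L = 0.31, five users already find over 80% of the problems, which is why returns diminish quickly beyond that.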

The company my friend worked for has its own customer database, so they can always recruit people from it via email. They have to take many criteria (age, location, status, etc.) into account, using screening questions to filter out unqualified people. But they only had two weeks for recruitment, which was too short, so the people who responded and participated were not as representative as they were supposed to be.
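Screening of this kind is essentially a filter over the candidate pool. The field names and criteria below are invented for illustration; they are not the company's actual screener questions:

```python
# Hypothetical screener: field names and criteria are illustrative only.
candidates = [
    {"name": "A", "age": 34, "location": "Pittsburgh", "uses_site": True},
    {"name": "B", "age": 17, "location": "Pittsburgh", "uses_site": True},
    {"name": "C", "age": 52, "location": "Cleveland",  "uses_site": False},
]

def passes_screener(candidate):
    """Keep adults in the target region who actually use the site."""
    return (candidate["age"] >= 18
            and candidate["location"] == "Pittsburgh"
            and candidate["uses_site"])

qualified = [c for c in candidates if passes_screener(c)]
print([c["name"] for c in qualified])
```

In practice a recruiter would also balance the qualified pool across categories (age bands, experience levels), not just filter, to avoid the one-category bias mentioned above.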

Tuesday, January 23, 2007

The data sources, research question & theories of the project

A list of possible data sources
Best possible source:
  • Contact the usability group at the Carnegie Museum of Art to request previous research data (or documents), web log records, or user study reports.
  • Contact Mr. Real, the staff member who runs the website at the Carnegie Museum of Art. I can get information about collection backgrounds and system architecture from him. This information will help me design my questionnaire, interview protocol, and testing tasks.
Sub-optimal source:
If there is no “oven-ready” data in the Carnegie Museum of Art (or none appropriate and adequate for my goal), I need to collect data directly from participants. The testing will be laboratory-based, and the data will include:
  • The results of observation
  • Real-time issues captured in observers' notes during the testing sessions
  • Transcripts of interviews
  • Questionnaires
  • Video and audio recordings (digital files)
  • Conversations from focus group meetings
  • Cursor activities recorded by tracking software
Worst case:
If the above data sources are not available, the only option left is to synthesize findings and analyze issues from the previous literature, and perhaps compare the content and design with other museum websites that have similar functions and features. I hope the worst case does not happen, because I want to use user-centered methods in my study.

A summary of possible relevant theories
There is little research related to the usability of museum websites, and most existing theories focus on planning and design. Fortunately, the Carnegie Museum website can be regarded as a type of digital library by its nature and characteristics (I will elaborate on this in my future paper). Usability evaluation of digital libraries has been evolving for about two decades. A few studies and concepts have shaped my understanding of this topic so far; the following paragraphs summarize them.

T. Saracevic (2000) suggests a conceptual framework for digital library evaluation which must involve selection and decisions related to:
  1. Construct that was evaluated: What was evaluated? What was actually meant by a “digital library”? What elements (components, parts, processes, etc.) were involved in evaluation?
  2. Context in which evaluation was conducted: selection of a goal, framework, viewpoint or level(s) of evaluation. What was the basic approach or perspective? What was the level of evaluation? What was the objective(s)?
  3. Criteria that were chosen as a basis for evaluation: What parameters of performance were concentrated on? What dimensions or characteristics were evaluated?
  4. Measures reflecting selected criteria to record the performance: What specific measure(s) to use for a given criterion?
  5. Methodologies that were used for the evaluation: What measures and measuring instruments were used? What samples? What procedures were used for data collection and for data analysis?
A clear specification of each of these is a requirement for any evaluation of digital libraries (Saracevic, 2000). My project will follow these five aspects as the conceptual framework for its usability testing.

Judy H. Jeng’s dissertation (2006) is a recent effort to find a better way to evaluate digital libraries. She created an evaluation model and, at the end of the study, generalized the model, instruments, and methods for use in academic digital libraries. I expect my study to build on her research and reexamine the validity of her model and its generalization by applying them to the Carnegie Museum of Art website. In addition, I expect to find new issues and variables not covered or confirmed by previous studies.

M.R. Patel (2003) examined the viability of a Category-Based Usability Theory, which holds that the usability of a website should be assessed on the basis of the category the website belongs to. While Web design experts have provided general design guidelines, guidelines may differ across site types. In his study, Patel verified the hypothesis that “if the conceptual model of the user is to be observed then true usability must come from analyzing current practices of Web design of the most popular websites within a category or field, and not merely by following recommendations from Web usability experts.” My study will support this notion and carefully define its scope of generalization.

Finally, one of the important relevant concepts is user-centered theory. Principally, user-centered theory argues for the user as an integral, participatory force in the process. Users are encouraged and invited to have a say; in other words, they are physically or discursively present in the decision-making process (Johnson, 1998, pp. 30-32). My study supports user-centered theory and adopts a user-centered approach to carry out the testing.

A Workable Research Question
This research project will carry out usability testing of the Carnegie Museum of Art (CMOA) website. Through the testing, the study can ultimately provide redesign recommendations for the CMOA website and re-examine the internal and external validity of the usability evaluation model designed by J. Jeng by applying it to the CMOA website. The study is also interested in the relationships among the variables (effectiveness, efficiency, satisfaction, and other unexpected variables) that influence user experience.
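The three core variables can be operationalized along the lines of ISO 9241-11: effectiveness as task completion rate, efficiency as time on task, and satisfaction as a post-test rating. A small sketch with invented session records (none of these numbers come from the project):

```python
# Illustrative only: session records are invented, not project data.
sessions = [
    {"completed": True,  "seconds": 95,  "satisfaction": 4},  # 1-5 scale
    {"completed": True,  "seconds": 140, "satisfaction": 5},
    {"completed": False, "seconds": 300, "satisfaction": 2},
]

# Effectiveness: proportion of sessions where the task was completed.
effectiveness = sum(s["completed"] for s in sessions) / len(sessions)

# Efficiency (one common operationalization): mean time on task
# among successful sessions only.
successful = [s for s in sessions if s["completed"]]
efficiency = sum(s["seconds"] for s in successful) / len(successful)

# Satisfaction: mean post-test rating across all participants.
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"effectiveness={effectiveness:.0%}, "
      f"mean task time={efficiency:.0f}s, satisfaction={satisfaction:.1f}/5")
```

Examining how these measures co-vary across participants and tasks is one way to probe the relationships among variables mentioned above.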

References
  1. Johnson, R. R. (1998). User-centered technology: A rhetorical theory for computers and other mundane artifacts. Albany, NY: State University of New York Press.
  2. Jeng, J. H. (2006). Usability of the digital library: An evaluation model. Unpublished doctoral dissertation, Rutgers, The State University of New Jersey, New Brunswick, NJ.
  3. Saracevic, T. (2000). Digital library evaluation: Toward an evolution of concepts. Library Trends, 49(2), 350-369.