A few years ago, I worked as a usability specialist at a large technology and services company based in Bangalore. The design unit at this company comprised a large team of nearly 100 people – mainly interaction designers, but also several visual designers, information architects and technical communicators.
At the time I worked there, several new systems and best practices were being put into place. A small usability team had also just been formed, and one of its mandates was to inculcate a culture of iterative design and systematic design feedback among the designers. The idea was to ensure a ‘basic’ quality standard on all User Experience (UX) projects going out of the design unit. A second agenda was to improve the skill sets and design sensitivity of junior designers and designers new to the team.
One of the ways in which we set out to do this was through a peer-to-peer design review system – essentially a modified Expert Review practice designed to suit our specific context:
For the system to be successful, it was imperative that the reviews were objective. (Personal opinions, or issues that could not be justified convincingly, would be a recipe for disaster! After working hard and passionately on a design, the last thing a designer would want to hear is baseless or subjective criticism.)
It was also important for the process to be quick and practical (oriented toward issue identification, iteration and dialog rather than presentation), since the design teams were typically hard-pressed for time and often managing multiple projects and deadlines simultaneously.
Initially, each review team was made up of 3-4 interaction designers of varying seniority.
Once the system had demonstrated a perceptible improvement in the quality of work, we decided to experiment by introducing a diverse perspective into the review team.
So in addition to the designers on each review team, a technical communicator (TC) was also included.
The reasons behind choosing a TC to contribute to the review team were many:
- Focus on User EXPERIENCE vs. User INTERFACE – Content is a big part of the overall experience, both in terms of stand-alone or complementary documents and systems, and content that is closely integrated into the main interface.
- Technical writers were not coming in cold – to start with, they already had a basic sensitivity to and understanding of usability issues, since many of them worked closely with design teams and often had to include workarounds and explanations for poorly designed systems in their documentation.
- They were already familiar with the system and practice of ‘peer review’ in the context of documentation. Several TC guidelines and heuristics are available online and are used in peer-to-peer documentation reviews.
Some valuable lessons emerged from this experiment:
- Starting with a simple checklist-based Heuristic Review (vs. an Expert Review, which can draw on multiple heuristic checklists as well as countless design principles, guidelines and best practices) proved to be a more systematic way of introducing TCs to the usability review process; expert review skills can only be built up and internalized through years of hands-on experience. (A minimal sketch of how such a checklist-based review might record findings appears after this list.)
- In addition to these fixed usability heuristics, it was most productive for TC reviewers to focus initially on labelling, wording and content-related issues, rather than attempt to immediately start reviewing UI aspects like navigation, layout or visual design.
- A verbal discussion / presentation of review findings proved very useful both to the designer whose work was being reviewed and to the reviewers:
- Having a dialog around the issues identified through review helped designers understand the issues more quickly and served as a useful way to get buy-in from the designers.
- Sensitivity to, and understanding of, unfamiliar issue types (UI-specific issues for the TC reviewers and content-related issues for the UI reviewers) could be developed over time by listening to the discussions around the different issues identified by designers and TCs in the context of the same application.
- Hearing and seeing issues worked through in discussion helped get new or below-average reviewers up to speed.
- TCs with previous experience working with design teams, or a basic understanding or prior application of generic UI guidelines, performed better than TCs who were freshers, or even more experienced TCs who had restricted themselves to usability guidelines specific to writing.
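To make the checklist-based approach concrete, here is a minimal, hypothetical sketch of how findings from such a review might be recorded. The names and structure are my own illustration and were not part of the actual process; the checklist shown is a small subset of Nielsen’s heuristics, and the severity scale follows Nielsen’s 0-4 ratings.

```python
from dataclasses import dataclass

# A small subset of Nielsen's usability heuristics, serving as the fixed checklist.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "Consistency and standards",
    "Error prevention",
]

@dataclass
class Issue:
    heuristic: str    # which checklist item the issue falls under
    description: str  # what the reviewer observed
    severity: int     # 0 (not a problem) to 4 (usability catastrophe)

def log_issue(issues, heuristic, description, severity):
    """Record an issue, allowing only heuristics that are on the checklist."""
    if heuristic not in HEURISTICS:
        raise ValueError(f"Not on the checklist: {heuristic}")
    issues.append(Issue(heuristic, description, severity))

# Example: a TC reviewer flags a wording inconsistency.
issues = []
log_issue(issues, "Consistency and standards",
          "'Sign in' and 'Log in' used interchangeably across screens", 3)

# Sort by severity so the most serious issues surface first in the discussion.
for issue in sorted(issues, key=lambda i: -i.severity):
    print(f"[severity {issue.severity}] {issue.heuristic}: {issue.description}")
```

Constraining reviewers to a fixed list like this is what keeps the early reviews systematic; the open-ended judgment of a full expert review comes later, with experience.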
An interesting observation was the difference in perspective between a TC and a designer in identifying issues.
Even prior to the introduction of TCs into the review team, the UI designers were required to identify issues in content and wording as part of their overall usability review.
However, when compared, designers showed a distinct tendency to focus on and catch certain types of content issues, which differed markedly from the types of content issues a TC intuitively highlighted.
For example, ‘readability’ issues (font type, size) were commonly identified by UI designers.
On the other hand, TCs typically highlighted issues like the use of active vs. passive voice, or consistency of writing style throughout the system.
On the whole, the experiment of introducing TCs into review teams helped demonstrate that a designer-TC review team works well and is beneficial to both professions:
- The TCs’ experience and aptitude for clear, concise writing helped communicate the issues and recommendations the designers identified more clearly, compellingly and quickly. (The impact or implication of issues identified by designers was sometimes lost because of poor written communication.)
- Discussions with the designers around how a wording issue impacts the UI and its context of use helped TCs gain additional insight into the nuances and feasibility of fixing a wording issue within a larger UI context.
- Getting involved in the process earlier on helped the TCs weed out issues that they might otherwise have only been able to address as a workaround or explanation in their help documentation.
Given the emerging trend of integrating help more closely into the main interface design (vs. a separate help document), a better understanding of UI design, along with awareness and application of usability methods, would be a beneficial skill for the TC community.
Training TCs to conduct heuristic reviews (and, over time, expert usability reviews) appears to be a good step in that direction.
Related Reading:
An article titled ‘Overlap, Influence, Intertwining: The Interplay of UX and Technical Communication’ from the Journal of Usability Studies (JUS) explores the relationship between the two disciplines and examines the roles that people with technical communication training have (and could have) within user experience.
About the Author
Devika is the founder and principal researcher at Anagram Research and conducts usability and research studies in the areas of personal computing, mobile devices, messaging and Internet experiences. Prior to this, she worked in varied technology and research environments such as Yahoo, Infosys and Human Factors International.
I have a few comments with respect to the usability reviews and the involvement of technical communicators. I worked in a software services company earlier. The reviews there were quantitatively focused and done for self-serving purposes, while the actual quality of the websites or web applications was not a priority. Just imagine a group of 3 or 4 people spending an hour and providing review comments on an application. Whether it is an information architect, a visual designer or a technical writer, one would need a good amount of time to understand the application. Context will be missing with the checklists. One has to be very careful in recommending this type of review, as these are quantitative. There were many instances where clients fired back at these review comments. I would suggest that design groups be extra careful in setting up these practices in their organizations.
Hi Kavitha – Usability Reviews (and Expert Reviews) are typically qualitative, although the severity of the issues identified can be rated. Here’s an online article on the same, FYI – http://www.useit.com/papers/heuristic/heuristic_evaluation.html
Reviews are also usually not checklist-based. (Although various guidelines, principles or heuristics may be referenced in the report in order to validate the issues identified by reviewers.)
Reviews are typically done over a couple of days, depending on the complexity of what is being reviewed. (The knowledge transfer for a complex application can itself take an hour!)
Besides needing time to understand the application and time to review it, it is important that the reviewers are trained in conducting reviews, to avoid invalid or subjective issue identification.
Not sure what you mean when you say the reviews you did were for selfish purposes. The idea of doing a review is to flush out usability issues related to various aspects of the UI (like navigation, IA, visual design, wording, etc.) – so the objective is definitely to focus on the quality and ease of use of the application.
Hi Devika – The point here is not about the purpose of usability reviews; everyone in the field of user experience design is aware of heuristic evaluation. The point is about the quality of these reviews. Companies have to ensure that the process implemented gives genuine results, instead of focusing on the number of reviews done, one person claiming credit for the number of reviews done, etc.
The whole idea of these reviews is to identify usability problems, and to achieve this, companies should ensure that the right people are in place, that reviewers have a good amount of time to do the reviews, etc.