The Value of End-User Surveys in Testing Landing-Page Usability

Kumar Dhanagopal

Organizations use websites for a variety of purposes: to sell products and services, to collect contact information from potential customers, to convince users to subscribe to news feeds, and so on. Regardless of the purpose, most websites have a page that is designed to be the starting point for users—the landing page.

With the advent of powerful search engines, users often reach specific pages within a website directly, bypassing the landing page. On many websites, users are guided to the page they need through a sequence of prompts designed to progressively filter the information available on the site. These technological advances have changed the role of the landing page from being “the” starting point for a website to just one of several starting points for finding information. Nevertheless, the landing page continues to play a vital role in meeting the needs of the organization and its audience. Like the table of contents of a book, the landing page tells users what the site contains, helps them locate the information they need, informs them about the hierarchy of information on the site, and, most importantly, can help convince them to stay on the site!

How do we test the usability of a landing page—that is, whether it meets the needs of the organization and the audience?

Usability Testing Methods

A usability test provides data about users’ experiences and reactions to specific design elements of the website. Broadly, website usability tests can be classified into two groups:

  • Tests that rely on indirect user-experience input, such as click-stream data and eye-tracking data—indirect because such tests use the recorded results (clicks, navigation paths, and so on) to infer the user experience that produced them (see the sketch after this list).
  • Tests that obtain direct feedback from users. This group of tests has the potential to provide reliable, qualitative usability input about user experience. Collecting data through online feedback forms, videotaping users in a lab environment, and conducting end-user surveys are a few examples of methods that provide direct data.
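
To make the distinction concrete, here is a minimal sketch, in Python, of how indirect click-stream data is typically aggregated before anyone interprets it. The log format (user, timestamp, page) and the page paths are illustrative assumptions, not the output of any particular analytics tool:

    from collections import defaultdict

    # Hypothetical click-stream log: (user, timestamp, page) events.
    clickstream = [
        ("u1", "2009-03-01T10:00:00", "/"),             # enters via the landing page
        ("u1", "2009-03-01T10:00:40", "/products"),     # continues into the site
        ("u2", "2009-03-01T10:02:10", "/"),             # enters and leaves (a bounce)
        ("u3", "2009-03-01T10:05:00", "/support/faq"),  # bypasses the landing page
    ]

    # Group each user's visits in order, then count landing-page entries
    # and single-page visits (bounces).
    pages_by_user = defaultdict(list)
    for user, timestamp, page in clickstream:
        pages_by_user[user].append(page)

    landed = [u for u, pages in pages_by_user.items() if pages[0] == "/"]
    bounced = [u for u in landed if len(pages_by_user[u]) == 1]

    print(f"Landing-page entries: {len(landed)}, bounces: {len(bounced)}")

The counts themselves are unambiguous; what they mean (confusion? satisfaction? a quick answer found?) still has to be inferred, which is precisely why such data is called indirect.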

Advantages of End-User Surveys

An end-user survey that is planned and executed well has the following advantages:

  • Provides an opportunity to choose test participants

    The survey administrator has the freedom to pick participants for the test, ensuring that the results of the survey are relevant to its purpose. In addition, while analyzing the survey results, usability analysts can apply what they know about specific participants to ‘read between the lines’ and extract more contextual meaning from the user feedback. With other methods of collecting user-experience data, such contextual information about the user is almost nonexistent.

  • Allows focused testing on specific areas

    The survey administrator can design the survey questions to focus on specific areas of usability, depending on the needs of the organization and perceived problem areas.

  • Enables meaningful and relevant feedback

    Participants in end-user surveys often know how the information they provide will be used, and they have the opportunity to think about their experiences before responding to the survey questions. As a result, responses to end-user surveys are likely to be well considered and balanced compared with responses gathered through other usability tests.

  • Ensures wide coverage of usability issues

    It is virtually impossible to come up with a perfect test that covers all possible user-experience areas. Most end-user surveys therefore encourage participants to provide feedback on areas not covered by the survey questions, usually via a free-form text field in the questionnaire (see the questionnaire sketch after this list). This ensures that end users get an opportunity to share vital input that other forms of testing might suppress, inadvertently or otherwise.
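
As an illustration of the last two points, here is a minimal sketch of a questionnaire definition that pairs focused, closed questions with a free-form catch-all. The question wording, identifiers, and rating scale are illustrative assumptions, not a recommended instrument:

    # Hypothetical questionnaire: two focused Likert questions plus a
    # free-form catch-all for issues the closed questions do not cover.
    questionnaire = [
        {
            "id": "q1",
            "type": "likert",  # closed question on a 1-5 agreement scale
            "text": "The landing page made it clear what the site offers.",
        },
        {
            "id": "q2",
            "type": "likert",
            "text": "I could find the link I needed without searching.",
        },
        {
            "id": "q3",
            "type": "free_text",  # catch-all for anything else
            "text": "Was there anything else about the landing page that "
                    "helped or hindered you?",
        },
    ]

    for question in questionnaire:
        print(question["id"], "-", question["text"])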

Disadvantages of End-User Surveys

Many of the advantages of end-user surveys discussed in the previous section can turn out to be double-edged swords.

  • Sampling bias in the selection of test participants

    While end-user surveys certainly provide us the opportunity to select survey participants, this ability, if not used wisely, can render the survey unscientific.

    One of the challenges in usability testing is maintaining objectivity through all the stages of the testing process: test design, execution, and results analysis. In a test that relies on indirect data (for example, click-stream data), most of the subjectivity creeps in during the results-analysis stage. In a test that relies on direct data (for example, results from an end-user survey), subjectivity can be injected right at the start of the process, when the survey participants are selected. In this sense, the results of an end-user survey can be just as subjective as results obtained through indirect data-gathering methods.

  • Inconsistent quality of responses to open-ended questions

    The quality of responses to open-ended questions cannot be expected to be consistent across all the survey participants. The usability analyst needs to ‘normalize’ the survey results to account for variables such as the writing skills of the participants (one simple approach for closed questions is sketched after this list). At times, a poor word choice by a participant can change the intended meaning of the feedback. In contrast, click-stream data and the results of eye-tracking tests are of consistent quality.

  • Subjectivity in designing test questions

    As with the selection of survey participants, things can go wrong in the design of the survey questions. Designers could influence the outcome of the survey, deliberately or otherwise, by including specific questions, providing specific answer choices, and choosing specific words to ask the questions.

  • Likelihood of low response rate

    Because end-user surveys are conducted after the user-experience event, their success depends on the willingness of the participants to provide feedback. Despite the best efforts of the survey administrator to motivate participants to answer the survey questions, the response rate to an end-user survey might be poor, rendering the effort wasted.

  • Not real time

    Indirect user-experience data, such as click-stream data, is recorded in real time, while the user is actually experiencing the website. By the time users answer survey questions, they are likely to be significantly removed, in both time and place, from the actual user-experience situation. Consequently, answers to survey questions might not reflect the real user experience.
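
As a concrete example of the normalization mentioned under ‘Inconsistent quality of responses’, here is a minimal sketch that rescales each participant’s closed-question ratings against that participant’s own average, so that habitual high raters and low raters do not skew the totals. The data and the z-score approach are illustrative assumptions; free-text answers still require human judgment:

    from statistics import mean, stdev

    # Hypothetical ratings on a 1-5 scale, one list per participant.
    responses = {
        "p1": [5, 5, 4, 5],  # rates everything high
        "p2": [2, 3, 1, 4],  # uses the whole scale
        "p3": [3, 3, 3, 3],  # rates everything the same
    }

    def normalize(ratings):
        """Rescale ratings relative to the participant's own mean."""
        m, s = mean(ratings), stdev(ratings)
        if s == 0:  # no variation: treat every answer as 'average'
            return [0.0] * len(ratings)
        return [(r - m) / s for r in ratings]

    for participant, ratings in responses.items():
        print(participant, [round(z, 2) for z in normalize(ratings)])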

When to Use End-User Surveys

  • End-user surveys are appropriate when it is easy to identify a representative sample of users. For example, when a company that has recently deployed a new customer-relationship-management application wants to know how the sales force in the field experiences it, the company can easily pick survey participants from among its employees.
  • An end-user survey is perhaps the best method for usability testing when the user population is limited. For example, when a department within a company decides to redesign its internal wiki page, the primary user group is limited to employees within the department.
  • In certain cases, quantitative ‘handles’ such as click-stream data are of limited value. For example, when the technical support department in a company wants to gauge the effectiveness of the company’s public customer-support site, it cannot rely solely on click-stream data, because that data does not necessarily indicate whether customers were able to find the solutions they needed. In such cases, end-user surveys are more useful.
  • When the need of the hour is quick decision making, an end-user survey can be the answer, because a questionnaire is easy to design and administer, and the results are easy to tally (see the sketch after this list). Most organizations have mechanisms through which they can shortlist survey participants (say, customers) and reach them. In contrast, setting up an elaborate eye-tracking test, or tuning the existing web-analytics infrastructure to capture data from specific customers and for specific purposes, can be a time-consuming exercise.
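
To show how quickly such a survey can yield a decision-ready summary, here is a minimal sketch that computes the response rate and per-question averages. The data and the 20-percent warning threshold are illustrative assumptions, not a standard:

    invited = 120  # hypothetical number of participants contacted

    # Hypothetical completed responses, ratings on a 1-5 scale.
    completed = [
        {"q1": 4, "q2": 2},
        {"q1": 5, "q2": 3},
        {"q1": 4, "q2": 1},
    ]

    response_rate = len(completed) / invited
    print(f"Response rate: {response_rate:.0%}")
    if response_rate < 0.20:  # low turnout undermines representativeness
        print("Warning: too few responses to generalize from.")

    for q in ("q1", "q2"):
        average = sum(r[q] for r in completed) / len(completed)
        print(f"{q}: average rating {average:.1f}")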


Suggested reading:
Kaushik, Avinash (2007). Web Analytics: An Hour a Day. Indianapolis: Wiley Publishing, Inc.
Nielsen, Jakob (2009). Top 10 Information Architecture Mistakes. Alertbox.

About the author:
Kumar works as a doc project lead at BEA Systems India, an Oracle company. He has been in the profession for 10 years. When he gets time after work and family commitments, he loves to dabble in carpentry. He is currently pursuing an online MS degree in technical communication from Utah State University.

About the illustration:

The image is used with permission from Mallika Yelandur.
