Remote user testing: interview of Javier Darriba

Citation: Javier Darriba; Mari-Carmen Marcos. "Remote user testing: interview of Javier Darriba". Hipertext.net, no. 11, 2013. http://www.hipertext.net


Performed in May 2013

Interviewee
Javier Darriba
https://twitter.com/Javierdarriba
http://www.userzoom.com/company/


Javier Darriba

Javier Darriba is CEO and founder of the Spanish consulting firm Xperience, focused on user experience, as well as co-founder and co-CEO of UserZoom, one of the market leaders in technology for remote user testing. Among other teaching activities, he is a professor at the Máster Online de Documentación Digital (Master in Digital Documentation, Idec-UPF).

In this written interview he presents the user testing technique known as remote user testing. He explains how user experience research is undergoing changes brought about by the emergence of new techniques that make it possible to conduct user research outside laboratories.

These techniques are supported by software that allows usability tests to be conducted online, with the user interacting from home or the office, so that large samples can be tested and quantitative and qualitative data combined. Their growth is supported by a methodological shift in software development, "agile development", which requires testing developments in a fast, iterative way.

These techniques are not restricted to online usability tests; they also include online card sorting, tree testing and others. Darriba claims that online research with users is set to become the predominant research technique, particularly in light of what has happened in related sectors such as market research, where online surveys have overtaken other traditional survey types in the ranking of the techniques most commonly used by market researchers.

1. Can you define remote user tests?

Remote user tests are a powerful online research tool based on goal-oriented user or scenario analysis: they allow usability and user experience to be tested through sophisticated remote user testing software.

2. Is research on user experience with remote tests a recent development?

If we look back ten years, we see that usability tests and user experience research in general have changed very little, if at all. On a technical level the sound may have improved, but we keep using the same screen capture software, and eye tracking innovations have been incremental, not revolutionary.

It has not been the same for other sectors such as online marketing, subject to continuous shakeups that make its founding pillars tremble every two or three years. Something similar has happened to traditional market research, where, according to ESOMAR's 2012 annual study, online market research represents 22%, well above telephone interviews (13%), focus groups (13%) and face-to-face interviews (11%).

But this is understandable: user experience research is much newer than market research, so it is normal, in a way, that market research was the first to shift to online studies.

3. Are there data about the use of these remote tests by UX professionals?

In the salary survey conducted by the UPA (currently UXPA) in 2007, "non-moderated" or "automated" remote user tests did not appear as a technique. In 2009, 18% of the interviewees acknowledged using this technique during the previous year. In 2011, the proportion of UX researchers who claimed to have conducted non-moderated remote user tests was 23%, the most significant growth of any technique in percentage terms. I have no doubt that next year's survey will produce even more forceful data, because the software offering the possibility of conducting remote user tests has made the expansion of this methodology possible.

4. When conducting a remote test, is the UX professional in any way present on the other side?

It depends on whether the test is moderated or non-moderated. Moderated remote usability tests work exactly like laboratory tests. The only difference is that the moderator is in a different place from the user: the moderator could be in Barcelona and the user in Madrid. Communication takes place over the telephone or through Skype, GoToMeeting, Webex, etc.; some of these tools show the user's face through the webcam and share the user's screen so the moderator can observe it. It is easy to deduce that, at a methodological level, this kind of technique differs (almost) not at all from a usability test in a laboratory.

However, in non-moderated remote usability tests, the UX researcher drafts the test script, detailing the tasks the user has to complete and the questions to be asked, and inserts them into the chosen testing software. Once inserted, the researcher sends the study out for users to complete. The users, as if it were an online questionnaire, answer the study without a moderator observing them. It is the software itself that moderates the study, showing the user the tasks to be completed, asking the questions before or after every task, and collecting information on the user's navigation, clicks, time and so on. This kind of testing, known as unmoderated or automated remote usability testing, has been increasingly employed in user experience in recent years, partly due to the introduction of agile development methodologies in corporations.

5. There is much talk about agile development. What is it about? How does it affect UX professionals, and specifically remote tests?

According to Wikipedia: "Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams". There are many agile development methods; most of them minimize risks by developing software in short lapses. The software developed in a time box is called an iteration, which usually lasts between one and four weeks. Each iteration of the life cycle includes planning, requirements analysis, design, coding, revision and documentation.

These development methodologies require fast, flexible and agile forms of usability evaluation. Laboratory usability tests normally take around three weeks to conduct, including the preparation of the test script and the screen capture setup, recruitment, test moderation and analysis. Such a long evaluation period does not fit these development methodologies.
The concept of the "guerrilla test" fits better with non-moderated remote usability tests, where the recruitment, moderation and analysis phases are significantly shortened.

6. Imagine we are to conduct one of those tests, which steps should we follow?

First, a test script is drafted, as with user tests in a classic laboratory setting. Unlike laboratory user tests, only 4 or 5 tasks are drafted. The user can also answer questionnaires before and after each task, and at the beginning and end of the study.

This script is inserted into the platform that will monitor the user test. Users can be recruited from a user panel such as those used by traditional market researchers, invited from a client database, or intercepted on the website we want to test.

Depending on the type of software used, users might have to download a plug-in to their computer. Once they agree to participate in the study, a bar appears at the bottom of the browser indicating what they have to do. While users perform the tasks, the software collects their time, number of clicks, paths and success ratio, and is sometimes even able to replay the user session.

Finally, remote user testing software includes tools to analyze the data, segment it by profiles and export it to tools that allow statistical analysis of the collected information.
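As an illustration of the kind of per-task aggregation this class of software performs, the sketch below computes efficacy (success ratio) and efficiency (time, clicks) from a hypothetical session log. The field names are invented for the example; they are not the schema of UserZoom or any real tool.

```python
# Hypothetical session log for one task; field names are illustrative only.
sessions = [
    {"user": "u1", "outcome": "success", "seconds": 48, "clicks": 6},
    {"user": "u2", "outcome": "abandon", "seconds": 95, "clicks": 14},
    {"user": "u3", "outcome": "success", "seconds": 52, "clicks": 7},
    {"user": "u4", "outcome": "error",   "seconds": 61, "clicks": 9},
]

def task_metrics(sessions):
    """Aggregate per-task efficacy (success ratio) and efficiency (time, clicks)."""
    n = len(sessions)
    successes = [s for s in sessions if s["outcome"] == "success"]
    return {
        "success_ratio": len(successes) / n,
        "abandon_ratio": sum(s["outcome"] == "abandon" for s in sessions) / n,
        "error_ratio": sum(s["outcome"] == "error" for s in sessions) / n,
        "avg_seconds_success": sum(s["seconds"] for s in successes) / len(successes),
        "avg_clicks_success": sum(s["clicks"] for s in successes) / len(successes),
    }

print(task_metrics(sessions))
```

With the example log, half the users succeed, a quarter abandon and a quarter validate a wrong page; the distinction between abandonment and error mirrors the breakdown discussed below for success-ratio charts.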

7. Which results can be expected from remote user tests? 

Once a remote user testing study is finished, we obtain rich information about the user experience with the tested interface. Let us categorize the information into three groups:

Efficacy: Does the user fulfill the online goals? Is the user able to perform the tasks? Does the user find the information he/she has been looking for? Is the user able to sign up and buy? Etc.

Efficiency: How many clicks, how much time and how much effort are needed to fulfill a task? How much effort does accessing our corporate intranet and downloading documents demand? How long does it take to learn to use an online purchasing (e-procurement) system? Etc.

Satisfaction: What do users think of our website? Do they like the graphics, images and colors of our website? At the end of their navigation, are they really satisfied? Would they come back happily? What could we do to please them more? Etc.

Some of the results we might obtain for each of these tasks are:
  • Video replay of user interaction during the task
  • Success and abandonment ratio when performing every task
  • Time and number of clicks needed to perform the task
  • Paths followed by the users to fulfill their goal
  • Reasons for abandonment
  • Pages and place where abandonment took place
  • Answer to questionnaires before and after the task
  • Analysis of each and every page the users have visited when performing a specific task
  • Users' behavior on every specific page
  • Page composition: weight of the page and its elements
  • Time spent on the page and time to download it
  • Etc.

8. Can you give us some examples of how these results are presented?

Yes. Below I explain, with images, how the UserZoom tool presents them:

Success ratios: This chart (figure 1) shows the percentage of users who have managed to perform a task, and the percentage who have not. Among those who have not succeeded, a distinction is made between users who abandon the task because they do not know how to reach the information they are asked to find, and users who make a mistake: they believe they have found the right information, but validation of their answer and the page they reached reveals that they did not know how to get to that page.

Eficiencia y Eficacia.jpg

Figure 1. Success ratios

Navigation paths: Normally, two kinds of paths are shown, although they can be filtered by any variable the researchers want. Usually, the path of users who successfully completed the task is shown alongside the path of those who did not. Next to the arrow indicating where users went, the percentage of users who took each path is shown. UserZoom also has a tool for "humanizing" URLs, making them more intelligible.

Clickmapping: Figure 2 shows where users click. It is very useful for graphically showing the different paths users take when starting a task.

clickstream.jpg

Figure 2. Clickmapping

Online questionnaires: At the beginning and end of the test, and before and after each task, users are asked questions depending on what they have done. The following image (figure 3) shows the classic final question assessing the website.

NPS.jpg

Figure 3. Questionnaire

Finally, figure 4 shows a screenshot of the page where the videos of users performing the task in UserZoom are viewed.

Videos.jpg

Figure 4. Videos of the users in UserZoom

9. Besides tests, are there other types of user research which are also conducted remotely?

Indeed. When we talk about Online UX Research or remote user experience research, we include not only remote user tests but other techniques as well. I describe them next, grouped by the moment in the development process of a website or piece of software at which they tend to be used.

A) During the conceptualization phase, true intent (Voice of the Customer) studies are conducted to learn who the visiting users of a website are, the purpose of their visit and whether that purpose was fulfilled. Information is obtained about the goals of the visit (why do they come to the website?), who they are (a socio-demographic profile) and whether they managed to find the information they were looking for. This kind of study focuses on users who access the website spontaneously, and its sample averages 500 participants. The data obtained answer the following questions: what is the purpose of the users who access my website? Do they manage to perform the task they came for? If they ran into trouble, what kind of trouble? What is their perception after using the website?

B) During the information architecture and prototyping phase, several techniques are available, which I present next.

Card sorting is an exercise in which the user divides and classifies a set of concepts into different groups. The goal is to study the way users organize information, grouping concepts and creating associations among them. Coincidences among users are then statistically analyzed to create categories as close as possible to the users' mental model.
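The statistical analysis of coincidences described above is typically based on a co-occurrence (similarity) matrix: for each pair of cards, the fraction of participants who placed both cards in the same group. A minimal sketch, with invented card names and example sorts:

```python
from itertools import combinations

# Each participant's sort is a list of groups of card labels (invented data).
sorts = [
    [{"shoes", "shirts"}, {"returns", "shipping"}],
    [{"shoes", "shirts", "shipping"}, {"returns"}],
    [{"shoes", "shirts"}, {"returns", "shipping"}],
]

cards = sorted(set().union(*[group for sort in sorts for group in sort]))

def cooccurrence(sorts):
    """Fraction of participants who placed each pair of cards in the same group."""
    counts = {pair: 0 for pair in combinations(cards, 2)}
    for sort in sorts:
        for group in sort:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return {pair: n / len(sorts) for pair, n in counts.items()}

sim = cooccurrence(sorts)
print(sim[("shirts", "shoes")])  # 1.0: every participant grouped them together
```

Pairs with high similarity scores are then fed to a clustering step (hierarchical clustering is the usual choice) to derive the candidate categories.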

On the other hand, in tree testing (or reverse card sorting), the participant must look for a category in a hierarchical content structure (figures 5 and 6).

Tree Testing 1.jpg

Figure 5. Tree testing, list of terms

Tree Testing 2.jpg

Figure 6. Tree testing, results

Third, screenshot click testing is a kind of test on an "under construction" website, which might be a high-fidelity prototype or a PSD image, for example. The user is given a mini-task consisting of clicking on an element of the page: the user observes the page and decides where to click. The software collects the users' clicks on the page and shows a map of those clicks (figure 7).

Clickmapping.jpg

Figure 7. Map of clicks

Finally, there is user testing on prototypes (in laboratories or remotely), which is just like ordinary usability testing; its only peculiarity is that it is conducted not on working websites but on prototypes made with Axure, Balsamiq, Justinmind, iRise or another prototyping tool.

C) During the launch or optimization phase, the remote tests described earlier should be introduced, as well as some of the techniques presented for the previous phase. This phase should also include A/B and multivariate testing, which belong to the field of web analytics and web optimization.
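At its core, an A/B test in this phase compares the conversion rates of two variants and asks whether the difference is larger than chance would explain. A minimal sketch of a two-proportion z-test, using only the standard library and invented traffic numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented example: variant B converts 260/2000 visitors vs. variant A's 200/2000.
z = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2))  # |z| > 1.96 means the difference is significant at the 5% level
```

Multivariate tests extend the same idea to several page elements varied at once, which multiplies the number of variants and therefore the traffic required.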

10. Could you summarize the advantages of using remote techniques and particularly remote testing in UX?

Remote user tests enrich user experience research by providing a kind of information we previously could not obtain in the laboratory.

It is a technique sustained by advanced software with many possibilities for evolution, and it will slowly gain ground on traditional user research and more qualitative techniques, which will remain for those research aspects where a face-to-face meeting with the user is unavoidable, as in ethnographic studies.

Few people dare to dispute that remote user tests will overtake laboratory user tests; the question is when that will happen.

11. Could you recommend some readings on the subject?

Of course, these are the ones I recommend:

  1. Nate Bolt, Tony Tulathimutte. Remote Research. Rosenfeld Media, 2010.
  2. Robert Schumacher (Ed.). Handbook of Global User Research. Morgan Kaufmann Publishers, 2010.
  3. Tom Tullis, William Albert and Dona Tedesco. Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies. Morgan Kaufmann Publishers, 2010.
  4. Tom Tullis and William Albert. Measuring the user experience: Collecting, Analyzing, and Presenting Usability Metrics (Interactive Technologies). Morgan Kaufmann Publishers, 2008.
Last updated 22-05-2013
© Universitat Pompeu Fabra, Barcelona