Trustiness in customer/user chat services: the importance of design

Author: Víctor Arrazola (Universitat de Barcelona), Silvia Herrera (Universitat de Barcelona), Bastien Mothais (Universitat de Barcelona), Mari-Carmen Marcos (Universitat Pompeu Fabra)

Citation: Arrazola, Víctor et al. (2013). "Trustiness in customer/user chat services: the importance of design". Hipertext.net, 11, http://www.upf.edu/hipertextnet/en/numero_11/trustiness_user_chat.html


Abstract: The use of web-based live chat help systems is on the rise, particularly among e-commerce businesses. Yet their wide use stands in stark contrast to the lack of analytic studies on how users interact with these systems. This article analyzes users' awareness of the presence of live chat help systems and the impact that the design of the live chat access banner has on their behavior. To that end, a usability test with 35 participants was carried out, together with two satisfaction surveys and a final questionnaire. The results show that the chat system is satisfactory to the vast majority of users and that, of the access banner designs tested, the one showing a human avatar proved the most successful in getting participants to use the live chat to solve their doubts.

Keywords: avatar, chat, e-commerce, icons, library, remote assistant, trustiness, user experience, UX.

 

Table of contents:

1. Introduction
2. Previous studies
3. Methodology
4. Results
5. Conclusions
Acknowledgements
References

 

1. Introduction

Customer or user service is a relatively common feature on websites where receiving immediate help can substantially improve the service, as indicated in a study published by Forrester (Clarkson et al., 2010): 44% of e-commerce users think that having an online shopping assistant is good service. On e-commerce sites, guiding the client through the purchase may secure a sale that would otherwise be lost. In other environments, such as intranets, chat facilitates communication among colleagues, who can chat within the work environment itself (companies like PhoneHouse and Softonic use it for this purpose), or serves to attend to other people, as in universities that have implemented chats as part of their student services (such as the University of Toulouse). Some libraries also use chats as part of a reference service, for instance those participating in the Spanish “Responde” service (http://www.responde.es) (figure 1).


Figure 1. “Responde” service from the public Spanish libraries, available both through e-mail and chat (ask online)

The proliferation of these services has also brought about more businesses offering online chat programs, as this comparison illustrates: http://socialcompare.com/en/comparison/compare-live-chat-support-software-help

Although most web users are familiar with chats, chats do not seem to be their first option for obtaining information on a website. Generally speaking, we users start by looking through the website, navigating it. If we do not find what we are looking for, we use the site's search engine, expecting it to lead us to where the information is, and only after that do we turn to the contact section and ask for help. Remote assistants are another form of contact, but synchronous rather than asynchronous (as email is). It is true that not all websites have a chat service for their users, nor do all websites need to implement one, but chat services are interesting and useful for websites with large amounts of information, with complex information, or where the user needs a certain degree of personalization.

When a website implements a chat service, it expects users to use it, so the chat access button is placed in a visible spot, perhaps scrolling with the page so that it always remains visible; it may appear on all pages of the website or only on those where consultations are expected; and it may have a discreet or a striking design, depending on the website's aesthetics. Chat access not only has to be visible when needed; it also has to convey trust to the user. If the service is not intuitive enough to show that it is a chat with a remote assistant behind it, or if it fails to generate enough trust for the user to use it, the user may leave the page without receiving an answer. At best, he or she might end up sending an email or calling customer/user service; at worst, in the case of e-commerce sites, he or she might end up on a competitor's website.

This article studies users' perception of online chat services and the relative merits of different banner designs. Specifically, three designs of the access button or banner are analyzed: a purely textual design, one that adds the company's corporate image to the text, and a third that includes the picture of a person as an avatar beside the text. Our study is based on the hypothesis that the design affects the trust users place in chats while navigating the website, and that the presence of a person (a picture) generates more trust than the other options considered. To answer our question, 35 people took part in a test with several tasks, together with two surveys and an interview.

The next section reviews previous studies on remote assistants via chat; section 3 presents the methodology used for the study; section 4 shows the results obtained; and the article closes with some conclusions.

2. Previous studies

Although chat assistants are quite common in several environments, both commercial and institutional, studies on user interaction with them are very scarce, and most refer not to chats with real, human assistants but to virtual assistants that simulate dialogue and try to answer the user by searching their databases.

A reference point for the present article is the work of Åberg and Shahmehri (1999, 2000), who conducted an analysis and usability study of assistants for websites. Their research, which dates back more than a decade, concluded that users are enthusiastic about the idea of someone orienting them in their navigation during a purchase process on an e-commerce website.

Chat applied specifically to attending to users in libraries is a widespread service, and the IFLA mentions it in its recommendations (2006, sections 2.3 and 2.4). Merlo (2005) presents a list of libraries offering that option in that same year.

Wells (2003) studies the placement of the button for accessing the chat. She observes that this button tends to appear on the library's homepage, or that a link of the “ask your librarian” kind takes users to the chat service. When the access button appears on more pages within the site, use of the service increases.

Some authors have analyzed chat services in depth, for instance by recounting their experiences, as in the libraries of Michigan State University (Clay Powers et al., 2010), or by trying to explain why some of these services disappear (Radford and Kern, 2006), the reasons being lack of funding, technical problems or difficulty reaching agreement among the consortium libraries offering the chat service.

After reviewing the related literature published during the last decade, we have found no previous studies analyzing user perception of chat services, and in particular none focused on the design of the banner that gives access to the service.

3. Methodology

To conduct the study, user testing was established as the most appropriate technique to confirm or refute the initial hypothesis about chat buttons. The test was intended to observe the behaviour of users on a website offering a user chat service, to find out whether users turned to that service when disoriented or when they did not find what they were looking for, and to learn more about their perception of these services.

3.1. Selection of the object of study: banner designs to access the chat

From what has been observed in several e-commerce businesses, libraries and other sites, we have concluded that there are different ways of giving access to chat services. In some cases, a link simply appears in a sentence explaining that a consultation can be made by clicking on it and that, at certain hours, a chat is available. At other times there is only a form for requesting information, with the answer delivered by email. Most commonly, however, access is through a button or banner. These are the three most frequent chat banner designs (figure 2):

  1. Text-only
  2. Corporate image or icon + text. Includes simple forms with a corporate icon or image.
  3. Avatar + text. The avatar can be the picture of a person or the drawing of a person.


Figure 2. Samples of banner design of the three types: text only, text + corporate image, and text + human avatar

Banner design tends to follow the personality of the brand. Brands with more sober designs tend to choose banners without graphics, using typography alone with several variations to make it more attractive. Others include their logo or some other distinctive element of the company. And others use banners incorporating an avatar into their design, often a picture of someone ready to attend to the client or user, to reinforce the message that there is someone behind it to help.

Generally speaking, we cannot claim that one type of banner design is more common in one sector than in others. Mostly, the decision belongs to the business and marketing strategy each company or sector is pursuing.

All three types of banners mentioned were used for this study: a text-only banner, one with a drawing belonging to the corporate image of the website, and another one with a human avatar (figure 3).


Figure 3. Banners used in the tests conducted in this study. From left to right: a) text-only banner, b) banner with an icon or image from the business, c) banner with a human avatar

3.2. Participants

Thirty-five regular Internet users voluntarily participated in this test. All of them have occasionally made e-commerce purchases, but none is a frequent user of web chat assistants. The participants are 30 years old on average (ranging from 25 to 35), almost equally divided between the sexes (17 men and 18 women), and all have university studies. Among the participants, 20 were French (8 men and 12 women) and the rest (9 men and 6 women) were Spanish, Guatemalan and Venezuelan. Fifteen people took the test remotely via Skype, and 20 did it in person (table 1).


Table 1. Sample of users and types of test conducted

3.3. Testing platform

A testing environment was created on the website of iAdvize, a company focused on selling chat services. The interface on which users took the tests was prepared in such a way that each task showed a different banner. The type of banner rotated across the tasks each user had to perform, so that every task was carried out with all three types of banner across the sample, avoiding the bias of always presenting the same banner with the same task.
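The rotation described above can be sketched as a simple counterbalancing scheme. The banner labels and helper function below are hypothetical illustrations, not iAdvize's actual implementation; the sketch merely shows how each participant sees every banner type while the banner-task pairing varies across participants:

```python
# Hypothetical sketch of the banner/task counterbalancing described above.
# Labels and function names are illustrative, not the study's actual code.

BANNERS = ["text_only", "icon_plus_text", "avatar_plus_text"]

def banner_sequence(participant_index, n_tasks=3):
    """Return the banner shown for each of a participant's tasks.

    Rotating the starting banner by participant ensures that, across
    the sample, every task is performed with every banner type.
    """
    n = len(BANNERS)
    start = participant_index % n
    return [BANNERS[(start + task) % n] for task in range(n_tasks)]

# Three consecutive participants see the banners in rotated orders:
for p in range(3):
    print(p, banner_sequence(p))
```

With three banners and three tasks, three consecutive participants cover every banner-task pairing exactly once, which is what removes the presentation bias the text mentions.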

3.4. Scenario and tasks

Before starting the test, users accessed iAdvize’s website (https://preprod.iadvize.com/) and navigated it for half a minute to familiarise themselves with its contents. Depending on their nationality (French or from Spanish-speaking countries), users entered the website in one language or the other.

This was the scenario presented to them: “You are in charge of the website of an online store. You are considering installing a chat system so that clients can resolve their queries more easily and directly. In a few minutes you have a meeting with the executives and you want to present your proposal, but you need to be well informed beforehand. You go to the iAdvize website (https://preprod.iadvize.com/es/), a company you know offers online chat services for websites, and look for the information you need”. Three questions were asked based on this scenario:

  1. You want your chat to be available in more than one language. Is it possible to hire a chat service in more than one language with iAdvize?
  2. Your website is also available for tablets and smartphones. Can iAdvize offer chat services for those platforms?
  3. You want to personalize the banner giving access to the chat so that it best fits the design of your website. How much would this design personalization service cost? How would iAdvize deliver the design?

For each of these tasks, the user of the iAdvize website was presented with a specific banner for accessing the chat. When the user finished a task, the banner changed and another task was proposed, and so on through the three tasks. The order in which both the chat banner designs and the tasks were presented was varied so that the test results were not biased. Users had no time limit for their consultations on the website, nor were they asked to use the chat.

The selected tasks were independent of one another and equally difficult, so that changing their order would not alter the results. All three tasks required information that did not appear directly on the website, which would likely lead users to use the chat system to resolve their query. When users interacted with the chat system, they received an answer from one of the testers, who resolved the query from another computer. This was an adaptation of the classic “Wizard of Oz” technique often used in the Human-Computer Interaction discipline, which consists in making users believe they are interacting with a system when a person is actually producing the answers (Kelley, 1984).

3.5. Testing sessions

The tests were conducted between December 2012 and January 2013. The 35 tests were taken by users in their usual environments, whether their homes or their workplaces. In 20 cases the tests were conducted in person, with the observer beside the user, whereas 15 tests were conducted remotely, using the “share screen” option of the Skype software to observe how users interacted with the interface. Remote testing made it possible to include users from different geographical areas, broadening the scope of the study.

During the testing sessions, notes were taken on navigation, behaviour and comments from the users regarding the placement of the banner and the use of the chat service. After the test, a questionnaire was presented in which users answered short questions and commented on whether they favoured one type of banner or another. Two standard usability surveys were also administered to measure user satisfaction with the chat system: the SUS (System Usability Scale) survey (Brooke, 1996) and the CSUQ (Computer System Usability Questionnaire) survey, the latter a version of the PSSUQ (Post-Study System Usability Questionnaire) survey (Sauro and Lewis, 2012).

3.6. Metrics

To answer the questions of this research, a user test was presented to 35 participants, consisting of 4 parts:

1) Observation of the users while interacting with the system to perform 3 tasks. Notes were taken on their actions, as well as their comments during the tests (users talked while performing the tasks, a technique known as “think aloud”).

2) SUS satisfaction survey, consisting of 10 questions (table 2), 5 worded in positive terms and 5 in negative terms, answered on a 5-point Likert scale. Each user's answers are then subjected to a calculation that yields a score between 0 and 100 points. According to Jeff Sauro's study based on 500 surveys, a system is satisfactory for a user when the SUS result exceeds 68 points (Sauro, 2011).
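The calculation mentioned above is Brooke's standard SUS scoring rule: positively worded (odd-numbered) items contribute (response − 1), negatively worded (even-numbered) items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The resulting sum (0-40) is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully positive respondent (5 on positive items, 1 on negative) scores 100:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```

A neutral respondent (3 on every item) scores exactly 50, which is why the 68-point threshold cited above sits noticeably above the midpoint of the scale.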

3) CSUQ satisfaction survey (Sauro and Lewis, 2012), with 19 questions answered on a scale from 1 to 7 (table 3).

4) Final interview. Based on a script with open and closed questions, a dialogue was established with each user to learn about his or her perception of the system. The starting questions were the following:

- Had you realized there were 3 different types of banner for accessing the chat?
- Did any button/banner catch your attention more than the others?
- Which banner generated more trust for you when making consultations?
- Which one do you prefer?
- Did you like using this chat system to find answers to your questions?

SUS questions

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.

Table 2. SUS survey. Source: http://www.usabilitynet.org/trump/documents/Suschapt.doc

CSUQ questions

1. Overall, I am satisfied with how easy it is to use this system.
2. It is simple to use this system.
3. I can effectively complete my work using this system.
4. I am able to complete my work quickly using this system.
5. I am able to efficiently complete my work using this system.
6. I feel comfortable using this system.
7. It was easy to learn to use this system.
8. I believe I became productive quickly using this system.
9. The system gives error messages that clearly tell me how to fix problems.
10. Whenever I make a mistake using the system, I recover easily and quickly.
11. The information (such as on-line help, on-screen messages and other documentation) provided with this system is clear.
12. It is easy to find the information I need.
13. The information provided with the system is easy to understand.
14. The information is effective in helping me complete my work.
15. The organization of information on the system screens is clear.
16. The interface of this system is pleasant.
17. I like using the interface of this system.
18. This system has all the functions and capabilities I expect it to have.
19. Overall, I am satisfied with this system.

Table 3. CSUQ survey. Questions 9 and 10 were answered as N/A because they were not applicable. Source: http://drjim.0catch.com/usabqtr.pdf
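Scoring the CSUQ as used here amounts to averaging the answered items on the 1-7 scale, with N/A items (questions 9 and 10 above) excluded from the denominator. The exact aggregation procedure is not specified in the study, so this is a hedged sketch assuming that higher values mean higher satisfaction:

```python
def csuq_mean(responses):
    """Average a participant's CSUQ answers on the 1-7 scale.

    Items marked None (answered N/A, e.g. the error-message questions
    when no errors occurred) are left out of the average entirely,
    rather than being counted as zero.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("no answered items")
    return sum(answered) / len(answered)

# Illustrative 19-item response set with questions 9 and 10 answered N/A:
sample = [6, 6, 5, 5, 6, 6, 6, 5, None, None, 6, 5, 6, 6, 6, 6, 6, 5, 6]
print(round(csuq_mean(sample), 2))  # -> 5.71
```

Skipping N/A items rather than scoring them as zero matters: with two unanswered questions, the divisor is 17, not 19, so unanswerable items do not drag the satisfaction score down.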

4. Results

This section presents the results extracted from the comments made by users during the test (think aloud), the notes taken by the observers, and the results of the SUS and CSUQ surveys and the final interview.

Users started their searches by navigating the website. If they did not find what they were looking for, they went to the FAQ section and, if they still did not find it, went back to navigating. After several minutes of fruitless searching they decided to use the chat. Several users (specifically 6) asked the observers if they could use the chat; the rest used it spontaneously, without asking.


Figure 4. A user uses the chat service. This image comes from a recording made with Camtasia during a Skype-conducted test. The whole recorded session can be found at http://www.youtube.com/watch?v=q2GDoREVIFY

During the test, users made their comments aloud. The data sought for this study were obtained from the observations noted down during the tests and from the users' comments. These data mainly concern user behaviour in the presence of a chat service and their perception of the system before and after using it.

At the beginning of the testing session, during the first task assigned, users commented that they could not complete the task: “I can't find the information; maybe to find it I would call the company”; “I don't know if I'm doing it as I should, if I'm looking where I should, but I can't find the information, I'll keep looking”.

Once they assumed they were not going to find the information on the website, or that it was not readily available, they wondered whether or not to use the chat system: “I'll ask on the chat, it is usually easier”; “Can I use the chat?”; “What if I click on the chat?”

When tackling tasks 2 and 3, users gave it much less thought and turned to the chat without searching for the information for long. After resolving their queries through the chat system, users tended to comment that they would rather have found the information directly on the page, without someone attending to them personally, since they trust what is written more, although they recognised that the system was very useful and were satisfied with it: “What happens is that I trust what I find on the page more than something a person might say; I trust what is written more”; “I like this kind of chat, they are very helpful. I had a recent experience with an online shop and this tool was very helpful for requesting some changes I needed”; “I prefer and trust the information I find on the site more than what a person or a chat might offer me, although, after using it, I think the chat can save us some time”.

Once the test was finished, users were asked to answer two surveys. The first, the SUS, gives a result between 0 and 100 for each person, a system being considered satisfactory if it obtains at least 68 points. The average on the survey was 84 points, which means user satisfaction was high. Of the 35 users, only 3 gave it less than the 68 points considered the threshold for a satisfactory system. In most cases the results were very high (29 people gave it 80 points or more).

The CSUQ survey results, in turn, show that the average satisfaction score of the tested users is 5.83 out of 7. Ninety per cent of the users were satisfied with the interfaces offered through the buttons. Within this group, 76% claimed to be “very satisfied” with the interface. Along the same lines, 75% thought the system had all the characteristics and functions that could be expected of it.

Based on what was observed in the tests, the final interview sought to determine whether users had any preference among the chat banner designs. Twenty-nine of the 35 users indicated they preferred the banner incorporating a human avatar, only 6 preferred the company logo, and no user chose the text-only option.

All users claimed to have noticed that the chat banner design changed throughout the test. Among the three designs they prefer the avatar, and the reason several users give is that the other two look more like advertising or pop-ups. Their chosen banner seems more trustworthy, is more conspicuous on entering the website, and caught their attention more than the other two.

5. Conclusions

Although at first users insist on navigating to find the information on the website, when they do not succeed they turn to the chat they are offered. It should be remembered that this was a test, so in a way users felt compelled to finish the tasks they were asked to do; a similar situation in a real context might have ended without recourse to the chat, simply by closing the page and looking at another website, or by trying to locate the company's telephone number to make the consultation immediately. Even so, despite the laboratory-like setting, it is remarkable that all users decided to use the chat; this alone is an interesting test result.

The user test did not consider time as part of its methodology, as often happens in this kind of study. The reasons were, on the one hand, that efficiency-related usability aspects were not being measured and, on the other, that each user's connection speed would make timings incomparable. Even without time as a metric, it was observed that users spend several minutes navigating before clicking on the chat access banner, and slightly less time when the banner includes a human avatar. This reduction is consistent with users' perception of the three designs: the human avatar is more conspicuous, makes it clearer that it is a chat, and makes the service look more trustworthy, more serious.

Generally speaking, users might have preferred finding the information on the website and not having to use the chat, but the service turned out to be useful, so they value it positively. If the assistant attending the chat had not given them answers, it is very possible they would have evaluated it entirely differently in the surveys, considering that although they are asked to evaluate the system, they are unconsciously and simultaneously evaluating the service received.

This research has shown that chat services on websites respond to visitors' need for further information. Often this information has to be personalized, which makes it difficult or impossible to include on the website itself. These services are used in several communication environments, both e-commerce and institutional ones, within companies or for user services, as happens in many libraries. A well-implemented and well-attended chat service is a bonus for a website, since once the user tries it, if the experience is good he or she is more likely to use it again.

Finally, we would like to make clear that the conclusions of this study cannot necessarily be extrapolated to virtual assistants, where a machine is programmed to answer questions related to the contents of the website; these are therefore not exactly the same as a chat service.

Acknowledgements

We thank all those who participated in the user tests, without whom this article would not have been possible, and also iAdvize, which provided us with an environment in which to conduct our tests.

References

Åberg, Johan; Shahmehri, Nahid (1999). "Web assistants: towards an intelligent and personal Web shop". Proceedings of the 2nd Workshop on Adaptive Systems and User Modelling on the WWW, held in conjunction with UM '99, Banff, 5-12.

Åberg, Johan; Shahmehri, Nahid (2000). "The role of human Web assistants in e-commerce: an analysis and a usability study". Internet Research, 10:2, 114-125.

Brooke, J. (1996). "SUS: a quick and dirty usability scale". In P. W. Jordan, B. Thomas, B. A. Weerdmeester & A. L. McClelland (eds.), Usability Evaluation in Industry. London: Taylor and Francis.

Clarkson, Diane; Johnson, Carrie; Stark, Elizabeth; McGowan, Brendan (2010). Making Proactive Chat Work: Maximizing Sales And Service Requires Ongoing Refinement. Forrester.

Clay Powers, Amanda; Nolen, David; Zhang, Li; Xu, Yue; Peyton, Gail (2010). "Moving from the Consortium to the Reference Desk: Keeping Chat and Improving Reference at the MSU Libraries". Internet Reference Services Quarterly, 15:3, 169-188.

IFLA (2006), Recomendaciones para el servicio de referencia digital. IFLAnet, http://archive.ifla.org/VII/s36/pubs/drg03-s.htm

Kelley, J. F (1984). “An iterative design methodology for user-friendly natural language office information applications”. ACM Transactions on Office Information Systems, 2:1, 26-41.

Merlo, José Antonio (2005). “Servicios públicos de referencia en línea” BiD, 14, http://www.ub.edu/bid/14merlo2.htm

Radford, M. L.; Kern, M. K. (2006). "A multiple-case study investigation of the discontinuation of nine chat reference services". Library & Information Science Research, 28:4, 521-547.

Sauro, Jeff (2011). Measuring Usability with the System Usability Scale (SUS), http://www.measuringusability.com/sus.php

Sauro, Jeff; Lewis, James R. (2012). Quantifying the User Experience: Practical Statistics for User Research. Chapter 8: Standardized Usability Questionnaires. Morgan Kaufmann, 185-240.

Wells, Catherine A. (2003). "Location, Location, Location: the importance of placement of the chat request button". Reference & User Services Quarterly, 43:2, 133-137.

Last updated 16-05-2013
© Universitat Pompeu Fabra, Barcelona