Website quality evaluation: a model for developing comprehensive assessment instruments based on key quality factors [article]

By Equipo OCM

17.01.2023


Abstract

Purpose

The field of website quality evaluation attracts the interest of a range of disciplines, each bringing its own particular perspective to bear. This study aims to identify the main characteristics – methods, techniques and tools – of the evaluation instruments described in this literature, with a specific concern for the factors analysed, and, based on these, to propose a multipurpose model for the development of new comprehensive instruments.

Design/methodology/approach

Following a systematic bibliographic review, 305 publications on website quality are examined; the field's leading authors, their disciplines of origin and the sectors to which the assessed websites belong are identified; and the methods they employ are characterised.

Findings

Evaluations of website quality tend to be conducted with one of three primary focuses: strategic, functional or experiential. The technique of expert analysis predominates over user studies, and most of the instruments examined classify the characteristics to be evaluated – for example, usability and content – into factors that operate at different levels, albeit with little agreement on the names used to refer to them.

Originality/value

Based on the factors detected in the 50 most cited works, a model is developed that classifies these factors into 13 dimensions and more than 120 general parameters. The resulting model provides a comprehensive evaluation framework and constitutes an initial step towards a shared conceptualization of the discipline of website quality.

[Article link]

Keywords

Website quality, Website, Website evaluation, Expert analysis, Heuristic evaluation, User experience, Usability.

Introduction

Over the last three decades, websites have become one of the most important platforms on the Internet for disseminating information and providing services to society. Shortly after their first appearance, the need to evaluate website quality became evident. The earliest analyses were developed by experts in human-computer interaction and comprised usability heuristics (), design principles () and rules (), aimed at improving interfaces. In parallel, inspection of the technical specifications of websites and verification of standards for application development emerged (). Subsequently, interest has grown in designing for an optimal user experience () and in quantifying users' perceptions of that experience ().

This evolution highlights the fact that, from its very outset, website quality evaluation has taken different approaches, analysing a range of different characteristics and employing a variety of methodologies. This may well be an indication that the discipline of website quality has yet to be fully consolidated and that its field of study is not readily delimited. This conclusion is further strengthened by the fact that the field has yet to agree on a formal definition for itself (; ).

Over the last twenty years, a number of different authors have offered their definitions. One of the earliest attempts () defines website evaluation as the act of establishing a correct and exhaustive set of user requirements, ensuring a site provides useful content that meets user expectations, while setting usability goals. For Aladwani and Palvia (2002), website quality is determined primarily by the degree to which a website's features are perceived by users to meet their needs and to reflect the site's overall excellence, while for others () website quality constitutes the attributes that contribute to its usefulness to consumers. Most recently, Semerádová and Weinlich (2020, p. 3) have proposed "the ability of a website to meet the expectations of its users and owners, as determined by a set of measurable attributes".

While these definitions coincide in the need for websites to satisfy user expectations, they differ in terms of the factors that should come under examination. Indeed, as an emerging research area, website quality has yet to achieve a common operationalization in the literature, and each study tends to highlight different measures that are relevant to its own particular context (). When evaluating the quality of a website, it is important to know what can be measured and how to measure it (). Moreover, evaluating website quality is not the same as undertaking a traditional quality evaluation, since it involves multi-criteria decision-making (), making it a particularly complex activity.

Thus, it is critical to identify the factors and characteristics that should be evaluated. In this regard, we can identify a first, traditional approach that might be defined as functional. Here, the focus is on the inspection of a website's inherent characteristics, including its content, information architecture and visual design, as well as its technical and operational features, linked to technology and security (; ). The second approach, which we can define as experiential, focuses on user experience and perceptions and examines such factors as usability, accessibility, satisfaction and interaction (; ). A third approach is more strategic in nature, focusing as it does on meeting the site owner's objectives and on the use of performance, visibility and positioning metrics, among others (; ).

All website quality evaluation instruments, regardless of their particular focus, have in common the fact that they seek to conceptualise and delimit the object they measure using some type of unit. The literature employs different names for these units, be it attributes, characteristics, variables (), factors or criteria (). Their use is largely synonymous: all of these terms allude to the distinctive features of a certain property of the analysed entity, that is, websites. Some authors () propose addressing these properties, from the most general to the most specific, as dimensions, parameters and indicators, the terminology we employ in this article. Dimensions constitute the generic properties of a website that we might want to evaluate. These can be divided, in turn, into more specific units, referred to as parameters. Finally, indicators are the core elements of analysis that make it possible to operationalize and assess the parameters. Thus, for example, the dimension of “information architecture” includes “labelling” as one of its parameters and this, in turn, includes, among others, “conciseness”, “syntactic agreement”, “univocity” and “universality” as its indicators.
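To make this three-level terminology concrete, the following minimal sketch models the worked example above in Python. The class names and structure are our own illustration, not part of the model proposed in the article.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the dimension -> parameter -> indicator hierarchy.
# Class names and structure are invented for this example.

@dataclass
class Indicator:
    name: str  # core element of analysis, e.g. "conciseness"

@dataclass
class Parameter:
    name: str  # more specific unit, e.g. "labelling"
    indicators: list[Indicator] = field(default_factory=list)

@dataclass
class Dimension:
    name: str  # generic property of a website, e.g. "information architecture"
    parameters: list[Parameter] = field(default_factory=list)

# The worked example from the text:
information_architecture = Dimension(
    name="information architecture",
    parameters=[
        Parameter(
            name="labelling",
            indicators=[
                Indicator("conciseness"),
                Indicator("syntactic agreement"),
                Indicator("univocity"),
                Indicator("universality"),
            ],
        )
    ],
)
```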

To evaluate these indicators, website quality studies employ different methodologies, experimental and quasi-experimental as well as descriptive and observational, typical of the associative or correlational paradigm. Likewise, such evaluations might adopt either qualitative or quantitative perspectives, undertaking both subjective and objective assessments. Similarly, they might employ participatory, direct methods – which record user opinions – or non-participatory, indirect methods – such as inspection or web analytics.

In the case of participatory methods, user experience (UX) studies have focused on user preferences, perceptions, emotions and the physical and psychological responses that can occur before, during and after the use of a website (). The most frequently employed techniques are testing – which resorts to such instruments as usability tests, A/B tests and task analyses –; observation – centred on ethnographic, think-aloud and diary studies –; questionnaires – including surveys, interviews and focus groups –; and biometrics – which uses eye tracking and psychometric and physiological reaction tests, to name just a few ().

Among the most common methods of inspection, we find expert analysis, a procedure for examining the quality of a site or a group of sites employing guidelines, heuristic principles or sets of good practices (). The most common instrument is that of heuristic evaluation, in which a group of specialists judge whether each element of a user interface adheres to principles of usability, known as heuristics (; ).

Other instruments employed in undertaking inspections include checklists, in which each indicator usually takes the form of a question whose answer – typically binary – shows whether or not the quality factor under analysis is met; scales, where each indicator is assigned a relative weight based on the importance established or calculated by the experts for each parameter under evaluation (); indices, metrics that not only evaluate a website's quality but also how good it is in comparison with similar sites (); and analytical systems, typically qualitative instruments of either a general or specialized nature, mainly aimed at evaluating individual websites, conducting benchmarking studies or serving as web design guides. These systems of analysis vary depending on the factors that their creators consider key to determining the quality of a website (). In this study, to standardise terminology, we refer to all of them as “evaluation instruments”.
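To illustrate the difference between a checklist and a scale, the following sketch scores the same binary answers both ways; the indicators and weights are invented for the example and are not drawn from any of the instruments discussed here.

```python
# Hypothetical indicators with binary checklist answers (met / not met).
checklist = {
    "descriptive page titles": True,
    "working internal links": True,
    "visible last-update date": False,
}

# Expert-assigned relative weights for the same indicators (summing to 1).
weights = {
    "descriptive page titles": 0.5,
    "working internal links": 0.3,
    "visible last-update date": 0.2,
}

# Checklist view: every indicator counts equally.
checklist_score = sum(checklist.values()) / len(checklist)

# Scale view: each indicator met contributes its expert-assigned weight.
scale_score = sum(weights[k] for k, met in checklist.items() if met)

print(f"checklist: {checklist_score:.2f}, weighted scale: {scale_score:.2f}")
# An index would go one step further, comparing these scores across similar sites.
```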

These instruments can be applied manually, that is, by experts in website quality or those with an understanding of the discipline; in a semi-automated fashion, with the help of software and specialised validators (); or in a fully automated manner (), using techniques of artificial intelligence () or natural language processing (). Thus, content analysis – a major technique in website quality inspection – can be applied in one of three ways.

Finally, we also find techniques aimed at the strategic analysis of performance (), including return on investment; search engine positioning (); and competitiveness, including web analytics () and webmetrics (). Additionally, within this group we find mathematical models for decision making with multiple, hybrid, intuitive or fuzzy criteria (). By employing criteria at different, unconnected levels, these models establish a hierarchy of evaluable factors (). They are used, among other applications, to weight user responses and to generate indices of satisfaction or purchase intention.
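As a rough intuition for how such models aggregate criteria arranged at different levels, the following toy example computes a two-level weighted sum. It is a deliberate simplification of the hybrid, intuitive or fuzzy multi-criteria methods cited above (such as AHP or COPRAS-G), and all weights and scores are invented.

```python
# Two-level hierarchy: criterion -> (weight, {sub-criterion: (weight, score)}).
# All numbers are illustrative only.
hierarchy = {
    "usability":   (0.40, {"learnability": (0.6, 0.80),
                           "efficiency":   (0.4, 0.70)}),
    "content":     (0.35, {"accuracy":     (0.5, 0.90),
                           "currency":     (0.5, 0.60)}),
    "performance": (0.25, {"load time":    (1.0, 0.75)}),
}

def aggregate(tree: dict) -> float:
    """Weighted sum of each criterion's weighted sub-criterion scores."""
    total = 0.0
    for weight, children in tree.values():
        total += weight * sum(w * s for w, s in children.values())
    return total

print(f"overall quality index: {aggregate(hierarchy):.3f}")  # 0.754
```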

Thus, this review of the literature highlights that the study of website quality is multidimensional. Moreover, such evaluations can adopt a range of different focuses and employ multiple techniques and instruments. With this as our working hypothesis, we seek here to determine the properties that characterise the main website quality evaluation instruments, as well as to identify the dimensions, parameters and indicators that they analyse in each case. Based on these outcomes, we develop a comprehensive evaluation framework (). This, in addition to unifying the different concepts examined and helping to clarify the broad panorama of website quality publications, should serve as both a guide and a model for the development of new instruments that can be employed by professionals and researchers alike in this field.


Citation

Morales-Vargas, A., Pedraza-Jimenez, R. and Codina, L. (2023), "Website quality evaluation: a model for developing comprehensive assessment instruments based on key quality factors", Journal of Documentation, Vol. 79 No. 7, pp. 95-114. https://doi.org/10.1108/JD-11-2022-0246



References

  • Adepoju, S.A. and Shehu, I.S. (2014), “Usability evaluation of academic websites using automated tools”, 3rd International Conference on User Science and Engineering (i-USEr), IEEE, pp. 186-191.
  • Akincilar, A. and Dagdeviren, M. (2014), “A hybrid multi-criteria decision making model to evaluate hotel websites”, International Journal of Hospitality Management, Vol. 36, pp. 263-271.
  • Aladwani, A.M. and Palvia, P.C. (2002), “Developing and validating an instrument for measuring user-perceived web quality”, Information and Management, Vol. 39 No. 6, pp. 467-476.
  • Al-Qeisi, K., Dennis, C., Alamanos, E. and Jayawardhena, C. (2014), “Website design quality and usage behavior: unified theory of acceptance and use of technology”, Journal of Business Research, Vol. 67 No. 11, pp. 2282-2290.
  • Anusha, R. (2014), “A study on website quality models”, International Journal of Scientific and Research Publications, Vol. 4 No. 12.
  • Apple (2018), Human Interface Guidelines, Apple.
  • Bevan, N., Carter, J. and Harker, S. (2015), “ISO 9241-11 Revised: what have we learnt about usability since 1998?”, in Kurosu, M. (Ed.), Human-Computer Interaction: Design and Evaluation, Springer International Publishing, Cham, pp. 143-151.
  • Booth, A., Sutton, A. and Papaioannou, D. (2016), Systematic Approaches to a Successful Literature Review, 2nd ed., SAGE Publications, London.
  • Cajita, M.I., Rodney, T., Xu, J., Hladek, M. and Han, H. (2017), “Quality and health literacy demand of online heart failure information”, Journal of Cardiovascular Nursing, Vol. 32 No. 2, pp. 156-164.
  • Cao, K. and Yang, Z. (2016), “A study of e-commerce adoption by tourism websites in China”, Journal of Destination Marketing and Management, Vol. 5 No. 3, pp. 283-289.
  • Chiou, W.-C., Lin, C.-C. and Perng, C. (2010), “A strategic framework for website evaluation based on a review of the literature from 1995–2006”, Information and Management, Vol. 47 Nos 5-6, pp. 282-290.
  • Choi, W. and Stvilia, B. (2015), “Web credibility assessment: conceptualization, operationalization, variability, and models”, Journal of the Association for Information Science and Technology, Vol. 66 No. 12, pp. 2399-2414.
  • Codina, L. (2018), Revisiones bibliográficas sistematizadas: procedimientos generales y framework para ciencias humanas y sociales, Universitat Pompeu Fabra, Barcelona.
  • Codina, L. and Pedraza-Jiménez, R. (2016), “Características y componentes de un sistema de análisis de medios digitales: el SAAMD”, in Pedraza-Jiménez, R., Codina, L. and Guallar, J. (Eds), Calidad en sitios web: Método de análisis general, e-commerce, imágenes, hemerotecas y turismo, Editorial UOC, Barcelona, pp. 15-40.
  • Cristóbal Fransi, E., Hernández Soriano, F. and Marimon, F. (2017), “Critical factors in the evaluation of online media: creation and implementation of a measurement scale (e-SQ-Media)”, Universal Access in the Information Society, Vol. 16 No. 1, pp. 235-246.
  • Daraz, L., Morrow, A.S., Ponce, O.J., Beuschel, B., Farah, M.H., Katabi, A., Alsawas, M., Majzoub, A.M., Benkhadra, R., Seisa, M.O., Ding, J., Prokop, L. and Murad, M.H. (2019), “Can patients trust online health information? A meta-narrative systematic review addressing the quality of health information on the internet”, Journal of General Internal Medicine, Vol. 34 No. 9, pp. 1884-1891.
  • Dey, A., Billinghurst, M., Lindeman, R.W. and Swan, J.E., II (2018), “A systematic review of 10 years of augmented reality usability studies: 2005 to 2014”, Frontiers in Robotics and AI, Vol. 5, p. 37, doi: 10.3389/frobt.2018.00037.
  • Dueppen, A.J., Bellon-Harn, M.L., Radhakrishnan, N. and Manchaiah, V. (2019), “Quality and readability of English-language internet information for voice disorders”, Journal of Voice, Vol. 33 No. 3, pp. 290-296.
  • Ecer, F. (2014), “A hybrid banking websites quality evaluation model using AHP and COPRAS-G: a Turkey case”, Technological and Economic Development of Economy, Vol. 20 No. 4, pp. 758-782.
  • European Commission (2016), “Europa web guide”, The EU Internet Handbook, available at: http://ec.europa.eu/ipg/ (accessed 17 June 2018).
  • Fernández-Cavia, J., Rovira, C., Díaz-Luque, P. and Cavaller, V. (2014), “Web Quality Index (WQI) for official tourist destination websites. Proposal for an assessment system”, Tourism Management Perspectives, Vol. 9, pp. 5-13.
  • Garrett, J.J. (2011), The Elements of User Experience: User-Centered Design for the Web and beyond, 2nd ed., New Riders, Indianapolis.
  • Gough, D., Oliver, S. and Thomas, J. (2017), An Introduction to Systematic Reviews, 2nd ed., SAGE Publications, London.
  • Grant, M.J. and Booth, A. (2009), “A typology of reviews: an analysis of 14 review types and associated methodologies”, Health Information and Libraries Journal, Vol. 26 No. 2, pp. 91-108.
  • Gregg, D.G. and Walczak, S. (2010), “The relationship between website quality, trust and price premiums at online auctions”, Electronic Commerce Research, Vol. 10 No. 1, pp. 1-25.
  • Hasan, L. (2014), “Evaluating the usability of educational websites based on students’ preferences of design characteristics”, International Arab Journal of E-Technology, Vol. 3 No. 3, pp. 179-193.
  • Health On the Net (2017), Principles: the HON Code of Conduct for Medical and Health Web Sites, Geneva, Switzerland.
  • Huang, Z. and Benyoucef, M. (2014), “Usability and credibility of e-government websites”, Government Information Quarterly, Vol. 31 No. 4, pp. 584-595.
  • Ismailova, R. and Inal, Y. (2017), “Web site accessibility and quality in use: a comparative study of government web sites in Kyrgyzstan, Azerbaijan, Kazakhstan and Turkey”, Universal Access in the Information Society, Vol. 16 No. 4, pp. 987-996.
  • ISO (n.d.), “Terms & definitions”, Online Browsing Platform (OBP), available at: https://www.iso.org/obp/ (accessed 18 February 2021).
  • ISO (2008), “ISO 9241-151:2008 ergonomics of human-system interaction. Part 151: guidance on World wide web user interfaces”, International Organization for Standardization.
  • Jainari, M.H., Baharum, A., Deris, F.D., Mat Noor, N.A., Ismail, R. and Mat Zain, N.H. (2022), “A standard content for university websites using heuristic evaluation”, in Arai, K. (Ed.), Intelligent Computing. SAI 2022. Lecture Notes in Networks and Systems, Springer, Cham, Vol. 506, doi: 10.1007/978-3-031-10461-9_19.
  • Jayanthi, B. and Krishnakumari, P. (2016), “An intelligent method to assess webpage quality using extreme learning machine”, International Journal of Computer Science and Network Security, Vol. 16 No. 9, pp. 81-85.
  • Kamoun, F. and Almourad, M.B. (2014), “Accessibility as an integral factor in e-government web site evaluation: the case of Dubai e-government”, Information Technology and People, Vol. 27 No. 2, pp. 208-228.
  • Karkin, N. and Janssen, M. (2014), “Evaluating websites from a public value perspective: a review of Turkish local government websites”, International Journal of Information Management, Vol. 34 No. 3, pp. 351-363.
  • Kaur, S. and Gupta, S.K. (2014), “Key aspects to evaluate the performance of a commercial website”, IJCA Proceedings on International Conference on Advances in Computer Engineering and Applications, Foundation of Computer Science (FCS), pp. 7-11.
  • Kaushik, A. (2010), Web Analytics 2.0: the Art of Online Accountability & Science of Customer Centricity, Wiley Publishing, Indianapolis, Indiana.
  • Keselman, A., Arnott Smith, C., Murcko, A.C. and Kaufman, D.R. (2019), “Evaluating the quality of health information in a changing digital ecosystem”, Journal of Medical Internet Research, Vol. 21 No. 2, e11129.
  • Król, K. and Zdonek, D. (2020), “Aggregated indices in website quality assessment”, Future Internet, Vol. 12 No. 4, p. 72.
  • Krug, S. (2014), Don't Make Me Think, Revisited: A Common Sense Approach to Web and Mobile Usability, 3rd ed., New Riders, Pearson Education, Berkeley, CA.
  • Kurosu, M. (Ed.) (2015), Human-Computer Interaction: Design and Evaluation, Springer International Publishing, Cham.
  • Lavrakas, P.J. (2008), Encyclopedia of Survey Research Methods, 5th ed., Sage Publications, Thousand Oaks, CA.
  • Law, R. (2019), “Evaluation of hotel websites: progress and future developments”, International Journal of Hospitality Management, Vol. 76, pp. 2-9.
  • Law, R., Qi, S. and Buhalis, D. (2010), “Progress in tourism management: a review of website evaluation in tourism research”, Tourism Management, Vol. 31 No. 3, pp. 297-313.
  • Leavitt, M.O. and Shneiderman, B. (2006), Research-based Web Design & Usability Guidelines, 2nd ed., U.S. Department of Health & Human Services, Washington, DC.
  • Lee-Geiller, S. and Lee, T.D. (2019), “Using government websites to enhance democratic E-governance: a conceptual model for evaluation”, Government Information Quarterly, Vol. 36 No. 2, pp. 208-225.
  • Leung, D., Law, R. and Lee, H.A. (2016), “A modified model for hotel website functionality evaluation”, Journal of Travel and Tourism Marketing, Vol. 33 No. 9, pp. 1268-1285.
  • Lopezosa, C., Codina, L. and Gonzalo-Penela, C. (2019), “Off-page SEO and link building: general strategies and authority transfer in the digital news media”, Profesional de la Información, Vol. 28 No. 1, pp. 1-12, e280107.
  • Maia, C.L.B. and Furtado, E.S. (2016), “A systematic review about user experience evaluation”, in Marcus, A. (Ed.), Design, User Experience, and Usability: Design Thinking and Methods, Springer International Publishing, Cham, pp. 445-455.
  • Manchaiah, V., Dockens, A.L., Flagge, A., Bellon-Harn, M., Azios, J.H., Kelly-Campbell, R.J. and Andersson, G. (2019), “Quality and readability of English-language internet information for tinnitus”, Journal of the American Academy of Audiology, Vol. 30 No. 01, pp. 031-040.
  • Martín-Martín, A., Orduña-Malea, E., Thelwall, M. and Delgado López-Cózar, E. (2018), “Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories”, Journal of Informetrics, Vol. 12 No. 4, pp. 1160-1177.
  • Morales-Vargas, A., Pedraza-Jiménez, R. and Codina, L. (2020), “Website quality: an analysis of scientific production”, Profesional de la Información, Vol. 29 No. 5, p. e290508.
  • Nielsen, J. (2020), “10 usability heuristics for user interface design”, Nielsen Norman Group, available at: https://www.nngroup.com/articles/ux-research-cheat-sheet/ (accessed 3 May 2021).
  • Nikolić, N., Grljević, O. and Kovačević, A. (2020), “Aspect-based sentiment analysis of reviews in the domain of higher education”, Electronic Library, Vol. 38, pp. 44-64.
  • Orduña-Malea, E. and Aguillo, I.F. (2014), Cibermetría: Midiendo El Espacio Red, UOC, Barcelona.
  • Paz, F., Paz, F.A., Villanueva, D. and Pow-Sang, J.A. (2015), “Heuristic evaluation as a complement to usability testing: a case study in web domain”, 2015 12th International Conference on Information Technology - New Generations, IEEE, pp. 546-551.
  • Quiñones, D. and Rusu, C. (2017), “How to develop usability heuristics: a systematic literature review”, Computer Standards and Interfaces, Vol. 53, pp. 89-122.
  • Rekik, R., Kallel, I. and Alimi, A.M. (2015), “Quality evaluation of web sites: a comparative study of some multiple criteria decision making methods”, Intelligent Systems Design and Applications (ISDA), 2015 15th International Conference. International Conference on Intelligent Systems Design and Applications, pp. 585-590.
  • Rekik, R., Kallel, I., Casillas, J. and Alimi, A.M. (2018), “Assessing web sites quality: a systematic literature review by text and association rules mining”, International Journal of Information Management, Vol. 38 No. 1, pp. 201-216.
  • Rocha, Á. (2012), “Framework for a global quality evaluation of a website”, Online Information Review, Vol. 36 No. 3, pp. 374-382.
  • Rosala, M. and Krause, R. (2020), User Experience Careers: What a Career in UX Looks Like Today, Fremont, CA.
  • Sanabre, C., Pedraza-Jiménez, R. and Vinyals-Mirabent, S. (2020), “Double-entry analysis system (DEAS) for comprehensive quality evaluation of websites: case study in the tourism sector”, Profesional de la Información, Vol. 29 No. 4, pp. 1-17, e290432.
  • Sauro, J. and Lewis, J.R. (2016), Quantifying the User Experience: Practical Statistics for User Research, 2nd ed., Elsevier/Morgan Kaufmann, Waltham, MA.
  • Semerádová, T. and Weinlich, P. (2020), “Looking for the definition of website quality”, in Semerádová, T. and Weinlich, P. (Eds), Website Quality and Shopping Behavior: Quantitative and Qualitative Evidence, SpringerBriefs in Business, Cham, pp. 5-27.
  • Shneiderman, B. (2016), “The eight golden rules of interface design”, Department of Computer Science, University of Maryland.
  • Shneiderman, B., Plaisant, C., Cohen, M.S., Jacobs, S., Elmqvist, N. and Diakopoulos, N. (2016), Designing the User Interface: Strategies for Effective Human-Computer Interaction, 6th ed., Pearson Higher Education, Essex.
  • Sun, Y., Zhang, Y., Gwizdka, J. and Trace, C.B. (2019), “Consumer evaluation of the quality of online health information: systematic literature review of relevant criteria and indicators”, Journal of Medical Internet Research, Vol. 21 No. 5, e12522.
  • Thielsch, M.T. and Hirschfeld, G. (2019), “Facets of website content”, Human–Computer Interaction, Vol. 34 No. 4, pp. 279-327.
  • Tognazzini, B. (2014), “First principles of interaction design (revised and expanded)”, askTog.
  • Toxboe, A. (2018), “Design patterns”, UI Patterns: User Interface Design Patterns Library, available at: http://ui-patterns.com/patterns (accessed 17 June 2018).
  • Tullis, T. and Albert, W. (2013), Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics, 2nd ed., Morgan Kaufmann, Waltham, MA.
  • W3C (2016), “Web design and applications standards”, World Wide Web Consortium.
  • Whitenton, K. (2021), “Triangulation: get better research results by using multiple UX methods”, Nielsen Norman Group, available at: https://www.nngroup.com/articles/triangulation-better-research-results-using-multiple-ux-methods/ (accessed 4 March 2021).
  • Xanthidis, D., Argyrides, P. and Nicholas, D. (2009), “Web Site Evaluation Index: a systematic methodology and a metric system for the assessment of the quality of web sites”, in Demiralp, M. (Ed.), Proceedings of the 8th WSEAS International Conference on Telecommunications and Informatics. Electrical and Computer Engineering Series, p. 194.
  • Yin, R.K. (2015), Qualitative Research from Start to Finish, 2nd ed., The Guilford Press, London.

 
