Developing an Instrument That Explores Customers’ Experience with Banking E-Services during the COVID‐19 Pandemic in Saudi Arabia

This study aims to develop a research instrument that measures e-banking customers' experience of online banking services. It also aims to validate and define the newly developed instrument and to explore the factors influencing customers' experiences with banking e-services in Saudi Arabia. An electronic database search informed a new questionnaire addressing the study's main themes (constructs). Experts assessed the questionnaire's face and content validity, and items that failed to contribute to an explanation of the study concepts were excluded. The first draft contained 10 themes and 78 items; once the questionnaire's conceptual framework had been clearly defined, the number of items was reduced to 75. The questionnaire captures customers' experiences with banking e-services, and it provides researchers and experts with a tool for assessing the factors that influence the promotion of e-services in the banking sector.

Nowadays, e-banking services are no longer a luxury; rather, they have become a necessity, mandated by unexpected global changes such as those associated with the COVID-19 pandemic (Marcu, 2021). For this reason, scientific research has an important role to play in examining the scope of dissatisfaction with e-banking. Hence, it is important for researchers to assess and explore the areas in which e-banking services have failed and to highlight changes that could improve these services. One method of doing this is to explore customers' experiences with e-banking services, identifying where the services worked well for them and where they did not, and then to explore methods for improving and broadening the scope of these services. A further purpose of this study was to develop a culturally sensitive instrument that could examine customers' experiences and satisfaction with e-banking services, particularly during the pandemic. The areas of failure and inadequacy as perceived by customers could serve as a guide for the policymakers and stakeholders involved in this vital sector, enabling them to address the issues and improve both the services and customers' level of satisfaction.

Objectives of the Study
The aim of this study was to develop a research instrument that measures e-banking customers' experiences with online banking services. In addition, the study aimed to validate and define the newly developed instrument and to explore the factors that influence customers' experiences of banking e-services.

Design
A new questionnaire was formulated to address the study's main themes (constructs). Experts examined the content and provided both face and content validity. In the process of building and validating the newly developed questionnaire, the experts' opinions were employed to identify items that failed to contribute clearly to the study concepts explored in the questionnaire.

Study Plan
The study started with data extraction from electronic databases, such as ProQuest and Emerald. A preliminary study was conducted using interviews with experts to identify the relevance of constructs and to verify the indicators to be used for measuring the various constructs (i.e., their face and content validity). The items developed to explore the constructs were sent to 10 panelists, who initially provided face validity and then, at a later stage, content validity.

Development of the Customer Experience of Banking E-Services Scale (CEBES)
The first section of the questionnaire covered the demographic variables of age, gender, educational qualification, occupation, income, and experience of service failures. The second section initially contained 78 items related to the construct, with responses ranging from strongly disagree (1) to strongly agree (5).
Four steps were followed during the development of the CEBES questionnaire: identifying themes for the scale, constructing the scale, judgmental evidence, and psychometric evaluation of the resulting themes and items (Zamanzadeh et al., 2014).

Identifying Themes for the Scale
The themes (domains) for the study construct (i.e., customer experience of banking e-services) were created based on the literature found from a search of the electronic databases PsycINFO, Emerald, SAGE, Google Scholar, and ProQuest Business. An extensive search of the literature was performed to find relevant studies published between 2010 and 2022. The keywords used in this search were as follows: bank, e-banking, e-services, customer, experience, and satisfaction. These words were searched for separately and in combination. A total of 243 articles were identified; manual screening for relevance to the study purpose reduced this number to 28 articles, which were analyzed in detail, and all reported factors and items were collated to create the first draft of the CEBES.

Constructing the Study Scale
Once the CEBES conceptual framework was clearly defined, the first draft contained 10 themes and 78 items. Each construct was measured by asking the respondents to respond to each operationally defined item on a range from 'strongly disagree' to 'strongly agree'. A set of items under each construct assessed the customers' experiences with that particular aspect of the banking e-services. The themes of the CEBES were as follows: perceived information quality (6 items), digital commitment (4 items), employee performance quality (5 items), justice (17 items), behavioral intentions (3 items), e-service recovery satisfaction (6 items), service satisfaction (5 items), safety (10 items), cultural impact on bank e-service choice (7 items), and perceived service recovery quality (12 items).

Judgmental Evidence
The scale was assessed for face and content validity (Zamanzadeh et al., 2015). First, four experts subjectively considered whether the questionnaire items derived from the published literature were relevant and essential and fell within the study purpose and content area (Cook & Beckman, 2006).
In addition, 15 participants were asked to complete the CEBES and to evaluate the questionnaire items for clarity of language, simplicity, readability, consistency of style and formatting, and appropriateness for the participants' culture. Their responses established the face validity of the scale (Yusoff, 2019). Based on the feedback from the experts and from the target group, any unclear or ambiguous items were revised and rewritten to reflect their meaning clearly, and then reviewed by a language specialist to ensure clarity of wording and grammar (Connell et al., 2018). After this initial subjective evaluation of the scale, the researchers used statistical tests to quantify the results. To estimate content validity, four experts evaluated the instrument to assess whether the content of the CEBES items adequately reflected the themes.

Psychometric Evaluation of the Resulting Themes and Items
The content validity index (CVI) was used to determine the items' relevance. Content experts were asked to assess the relevance of each item on a 4-point Likert scale (1 = not relevant to 4 = very relevant). Both the item-level CVI (I-CVI) and the scale-level CVI (S-CVI) were then computed from the item ratings (Polit et al., 2007). Two scale-level methods were used: the average CVI (S-CVI/Avg) and the universal agreement among experts (S-CVI/UA) (Rodrigues et al., 2017; Yusoff, 2019).
I-CVI values greater than 0.79 indicated that an item was relevant, values between 0.70 and 0.79 indicated that the item needed revision, and items with values below 0.70 were eliminated (Vakili & Jahangiri, 2018). The S-CVI was calculated from the number of items in the tool that had achieved a rating of "relevant" (Vakili & Jahangiri, 2018). The S-CVI/UA was calculated by dividing the number of items with an I-CVI equal to 1 by the total number of items, and the S-CVI/Avg by dividing the sum of the I-CVIs by the total number of items (Vakili & Jahangiri, 2018). An S-CVI/UA ≥ 0.8 and an S-CVI/Avg ≥ 0.9 indicate excellent content validity (Rodrigues et al., 2017).
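As an illustration, the I-CVI and the two scale-level indices described above can be computed as follows. This is a minimal sketch with hypothetical expert ratings; the function name and the example data are illustrative only and are not taken from the study.

```python
import numpy as np

def cvi_metrics(ratings):
    """Compute I-CVI, S-CVI/UA, and S-CVI/Avg from a ratings matrix.

    ratings: n_experts x n_items array of 1-4 relevance scores.
    """
    r = np.asarray(ratings)
    # An expert judges an item "relevant" by rating it 3 or 4.
    agree = r >= 3
    # I-CVI: proportion of experts judging each item relevant.
    i_cvi = agree.mean(axis=0)
    # S-CVI/UA: share of items with universal agreement (I-CVI = 1).
    s_cvi_ua = float((i_cvi == 1.0).mean())
    # S-CVI/Avg: mean of the I-CVIs across all items.
    s_cvi_avg = float(i_cvi.mean())
    return i_cvi, s_cvi_ua, s_cvi_avg

# Hypothetical example: four experts rate three items;
# the third item lacks consensus.
ratings = [[4, 4, 2],
           [3, 4, 3],
           [4, 3, 2],
           [4, 4, 4]]
i_cvi, s_cvi_ua, s_cvi_avg = cvi_metrics(ratings)
```

With these ratings, the first two items reach an I-CVI of 1.0 and are kept, while the third scores 0.5 and would be eliminated under the 0.70 cutoff.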
The response process was quantified by computing a face validity index (FVI) for item clarity and comprehension, analogous to the CVI. The experts were asked to rate each item on a 4-point scale ranging from not clear (1) to very clear (4). They were then asked to rate the importance of each item on a 4-point Likert scale from not important (1) to very important (4). The item impact score was calculated as Impact Score = Frequency (%) × Importance, where frequency was the proportion of raters scoring an item 3 or 4 for importance, and importance was the item's average score on the Likert scale (Zamanzadeh et al., 2015). The evaluation criterion depended on the value of the item impact score: items scoring ≥ 1.5 were kept, and items with lower scores were removed (Zamanzadeh et al., 2015). In the final draft, following the experts' evaluation, 75 items were included in the CEBES questionnaire. After the face and content validity had been quantified by statistical testing, the resulting CEBES comprised 10 themes containing 75 items. Cronbach's alpha coefficient was used to estimate the reliability and internal consistency of the CEBES (Taherdoost, 2016).
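The item impact score and the Cronbach's alpha reliability estimate mentioned above can be sketched as follows. The function names and the example data are hypothetical and are not taken from the study; the alpha function uses the standard formula based on item and total-score variances.

```python
import numpy as np

def item_impact_scores(importance):
    """Impact score = frequency (proportion rating 3 or 4) x mean importance.

    importance: n_raters x n_items array of 1-4 importance ratings.
    """
    imp = np.asarray(importance, dtype=float)
    freq = (imp >= 3).mean(axis=0)   # proportion of raters scoring 3 or 4
    return freq * imp.mean(axis=0)   # multiplied by the item's mean rating

def cronbach_alpha(responses):
    """Standard Cronbach's alpha for n_respondents x n_items Likert data."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)        # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: item 1 clears the >= 1.5 retention
# threshold, item 2 does not.
impact = item_impact_scores([[4, 2], [4, 3], [3, 2], [4, 1]])
keep = impact >= 1.5
```

Here the first item's impact score is 1.0 × 3.75 = 3.75 and it is retained, while the second item's score is 0.25 × 2.0 = 0.5 and it is removed.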

Face Validity
The face validity index was measured using a five-point scale (1-5) for three criteria: clarity and comprehension, appropriateness, and spelling. Five experts participated in this phase and gave their opinions on the items included in the questionnaire. As shown in the table below, the average clarity and comprehension score was 4.44/5, the average appropriateness score was 4.52, and the average spelling score was 4.52. The face validity index for the scale (78 items) was 4.49/5 (S-FVI: 89%). For the item face validity index (I-FVI), the highest value of 4.93 was given by the raters to the statement "The bank follows my request and updates me frequently," while the lowest value of 3.60 was given to two items: "The information provided is from designated online/phone service personnel" and "The service provider's goal (beyond profit) is to treat the customer well (benevolence)."
Four experts participated in the content validity phase and gave their opinions on the items in the questionnaire. The scale content validity index (S-CVI) was 4.51/5.00 (90.02%) for the 76-statement questionnaire. As shown in the table below, the item-content validity index (I-CVI) was 4.55/5 for relevancy and 4.46/5 for essentiality. The highest appropriateness average score of 4.52 was given to the items "Using e-services is enjoyable" and "The backup of customer information is well-maintained during any emergency shut down." The lowest average I-CVI for relevancy, 3 out of 5, was given to the item "I am a tech-savvy (passionate) person."
The reviewed studies were conducted in different areas around the world. All the themes emerging from this review were included in a new questionnaire, and the items associated with each theme were included regardless of the context in which they had initially been reported. This step ensured that all possible items were included in the questionnaire.
Experts in the field of banking e-services were consulted to give their opinion on two areas of validation: face and content validity. Clarity and comprehension, appropriateness, and spelling made up the face validity components, and these were measured using a 5-point scale. Content validity comprised relevancy and essentiality, and this was also measured using a 5-point scale. Three of the initial 78 items were reported as having weak content or face validity. Therefore, those three items were excluded, and 75 items were finally included in the new questionnaire.
This study is expected to have implications for both theory and practice. This newly developed scale addresses an essential topic that has surfaced as being of major concern to many customers and bankers around the world, including in Saudi Arabia, due to the impact of the pandemic (Ozkan et al., 2020). The instrument includes items that experts considered necessary to include when examining customers' experiences with e-banking services. These items cover aspects that bankers and banking experts consider essential if customers' willingness to replace conventional banking services with e-services in their everyday practice is to be improved (Jebarajakirthy & Shankar, 2021).
Instilling confidence and trust in customers is a bank priority; therefore, it is imperative to address the factors that influence their experience with e-services (Ozuem et al., 2021). Generally, developing customers' trust in a service must rest on essential components such as fairness, justice, timeliness, and safety.
The themes and items included in this study show that customers' experiences are multifactorial and complex. Hence, the bank should be vigilant in identifying and tackling every factor to promote better levels of e-service engagement, remove doubt, and increase levels of trust in banking services. Based on the themes of the questionnaire, managers and bankers need to develop and deliver transparent e-services and ensure that the products or services they sell to customers meet their needs and are of superior quality. They also need to ensure that services are delivered to customers in a timely manner while maintaining safety and vigilance. Furthermore, the process of receiving, treating, and responding to complaints, errors, and failures should be transparent, and the response should be so managed that customer satisfaction and convenience are ensured.
The findings of this study emphasize that the experiences of international and Saudi e-service customers differ little. In that respect, the experts' evaluations (CVI and FVI) resulted in the majority of items, all originally extracted from the international literature, being kept. The main limitation of the study relates to its sample and the sampling technique employed: the study relied on convenience sampling. The newly developed instrument needs to be tested further on a larger sample and in different communities to establish its stability and relevance to a Middle Eastern country. It could be tested in other similar communities, such as the Gulf Cooperation Council countries.
This study has a number of limitations, including the limited number of experts available to review and give their opinions on the CEBES. In addition, the study used convenience sampling, which may not accurately represent the population of interest. Moreover, although the CEBES underwent face and content validation, it did not undergo construct validity testing, such as confirmatory factor analysis. The number of themes and their corresponding items can be viewed as large, so the scale might need to undergo an item/theme reduction process, such as principal component analysis, to keep only items that contribute significantly to the explanation of the construct. The CEBES can nonetheless be viewed as a promising scale that could be used by many researchers interested in customer experience with banking e-services once it has been validated through proper statistical testing procedures.

Conclusion
The CEBES is a new tool, developed through carefully controlled processes, for examining customer experience with banking e-services. The results show that its face and content validity are satisfactory; however, future studies may confirm, add, or delete some of the items. Although the CEBES has been tested for face and content validity, its construct validity has yet to be explored. In addition, the scale could be submitted to a process of reduction, as the 10 themes, with their corresponding 75 items, should be weighted for their contribution to explaining the construct. Finally, the CEBES could be used to measure and improve the customer experience of banking e-services in Saudi Arabia and other countries in the region, and in other countries after testing its suitability for the particular context.