Saudi Graduate Students’ Perceptions Toward Automated Writing Feedback for Improving Academic Writing

Recent years have witnessed a growth in studies of feedback on second language writing, including computer-based feedback (Zhang & Hyland, 2018). Automated writing feedback refers to the immediate, computer-generated feedback that corrects or improves a text. The current study examines the perceptions of Saudi graduate students toward using automated writing feedback to improve their academic writing. The research design is a quantitative survey study, with a 13-item questionnaire as the data collection instrument. The sample consisted of 46 Saudi graduate students (male and female). Descriptive statistics revealed positive perceptions of the use of automated writing feedback tools. The findings of this study could shed light on the importance of conducting further research on the impact of automated writing feedback on one particular aspect of writing, such as organization, content, coherence, unity, or style.


Introduction
Of all language skills, writing is considered one of the foundational pillars of language learning. It gives learners the independence, fluency, and comprehension needed to create a meaningful text. Undoubtedly, proficiency in English has become like possessing the fabled Aladdin's lamp that opens the linguistic gate to different international fields. Thus, English provides linguistic power (Kachru, 1986), especially for those who learn English as a foreign or second language (EFL or ESL). Regarding EFL and ESL, Hyland (2003) asserts that "learning to write in a foreign or second language mainly involves linguistic knowledge and the vocabulary choices, syntactic patterns, and cohesive devices that comprise the essential building blocks of texts" (p. 3). Moreover, non-native English speakers (NNESs) have unique personal goals for writing, and these goals differ based on cultural norms and expectations; however, they all share improvement as their main aim (Cumming, 2006).
Academic writing is considered a challenging task for non-native English speakers (Hyland, 2003) because of the long time a learner needs to acquire competence. Indeed, graduate students are required to accomplish various writing tasks for different courses of study, but all of these tasks demand one thing: a competent academic style. Students must be able to recognize the particular style of expression that is academic writing. Academic writers are required to employ a variety of rules and patterns that distinguish their writing from other types, and they must ensure that their papers are written properly. For this purpose, a word processing program that primarily focuses on finding spelling and basic grammar errors is unfortunately not helpful for academic writers (Swales & Feak, 2012). Other tools, however, may make a substantial difference, such as automated writing feedback (AWF). AWF generates comments, and writers have complete freedom to modify their writing based on these suggestions. It offers many features that encourage and motivate students to review their work by themselves. Moreover, it saves time by presenting immediate feedback, in addition to a plagiarism percentage. Content clarity, writing style, word choice, spelling, grammar, organization, accuracy, and so on are all examples of features that these electronic tools report in less than a minute. Nevertheless, such tools may lead writers to neglect the role of human beings in the feedback process (Dikli & Bleyle, 2014).
Using electronic tools for writing feedback is a debated practice (Lavolette, Polio, & Kahng, 2015; Ranalli, Link, & Chukharev-Hudilainen, 2017). Many questions have been raised about the effectiveness of these tools. Are the suggestions they produce correct? Is automated writing feedback a fad, or is it here to stay? Does it propose accurate suggestions regardless of topic and writing style? The major question for this study is whether such tools provide feedback good enough to improve graduate students' writing.

The Significance of the Study
Nowadays, we are witnessing a wide variety of technologies that have a massive impact on second language writing (Hyland, 2003). According to Li et al. (2017), these technologies can be divided into three main categories: Web 2.0 tools, automated writing evaluation, and corpus-based tools. Unlike Web 2.0 and corpus-based tools, automated writing feedback has received limited research attention, particularly in Saudi Arabia. In addition, few studies have investigated the effective use of automated writing feedback in terms of students' perceptions (Zhang, 2020). Consequently, this study intends to explore the perceptions of Saudi graduate students toward the use of different automated writing feedback tools in their academic writing. The findings of this study will be significant in providing fruitful knowledge about the effectiveness of immediate writing feedback.

The Research Question
• What are the perceptions of Saudi graduate students about the use of automated writing feedback for their academic writing?

Feedback on Writing
In the field of learning to write in a second language, providing feedback is seen as the primary way of encouraging the development of students' writing (Hyland, 2003). Feedback is considered an integral part of the learning process. Although it takes different forms, some traditional and others modern, all aim to provide comments that improve the learners' writing.
Many studies have affirmed the importance of feedback on writing skills and its effective impact on ESL learners. Razali and Jupri (2014), for example, asked which of three types of feedback were preferred by ESL students at a university in Malaysia. Results showed that students were encouraged by all types, and that critical feedback was more successful in improving the students' written work. This is supported by the study of Srichanyachon (2012), which revealed that feedback from a teacher affects the language accuracy and motivation of the students. Moreover, different methods of feedback are essential for the development of writing skills for ESL students in Saudi Arabia (Grami, 2005, 2010; Alshuraidah & Storch, 2019).

Computer-Based Feedback
New technology has revolutionized the way we learn. Multiple studies have been conducted to investigate computer-based feedback, especially in the last few years (e.g., Zhang, 2017; Wilson & Roscoe, 2020; Grami, 2020). As mentioned earlier, the process of editing and revising writing becomes much easier and quicker for students through the use of various electronic feedback tools that provide a large number of features.
Several recent studies have examined automated feedback and revealed positive findings. For example, Zhang (2017) emphasizes that English as a foreign language (EFL) students who engage effectively with feedback on their writing will benefit. The researcher conducted a study focused on students' engagement with a freely available automated evaluation system, and results showed that AWF had a positive impact on EFL writing students.
Similarly, Wilson and Roscoe (2020) conducted a study with a sample of sixth graders that focused on the effectiveness of automated writing evaluation systems. The procedure was to compare automated writing evaluation systems with word-processing software using multiple metrics. Results showed that students who used automated writing evaluation had more positive writing self-efficacy and better performance on a state English language arts test.
Another study likewise described immediate feedback as a helpful tool that facilitated the learning process. The data of 23 university students were collected using pre- and post-writing samples, in addition to a questionnaire and semi-structured interviews.
Conversely, other studies have revealed negative aspects of computer-based feedback. A study conducted in three college writing classes in Taiwan investigated how students interact with automated evaluation of their writing. The findings were generally not positive: students were frustrated, and they believed the system limited their learning about writing (Chen & Cheng, 2008).
Furthermore, according to Lai (2010), some EFL students were frustrated by using an automated writing system. They believed it was less helpful because it presented vague, fixed, and repetitive feedback. In addition, it was affected by the speed of the Internet connection; in other words, those with a slow connection did not receive appropriate feedback. The findings of this study were elicited from 22 second language learners.
The literature indicates that, despite the mixed results of previous studies, the use of automated writing feedback by EFL and ESL learners needs further study, especially in Saudi Arabia.

Methodology
According to Mills and Gay (2019), a quantitative approach is used to analyze numerical data to explain phenomena of interest. Therefore, the methodological approach taken in the current study is a quantitative methodology based on survey research. Survey research aims to present a numeric description of the opinions or attitudes of a sample from a specific population (Creswell, 2014).

The Subjects
The study sample consisted of 46 Saudi graduate students, chosen randomly. The subjects were asked to complete the questionnaire anonymously and without any time limit. All subjects were classified according to their gender, age, academic major, and GPA, as shown in Table 1. In addition, to ensure that all subjects matched the target sample, one non-Saudi respondent was eliminated.

Data Collection Procedure
The questionnaire was adopted and modified from previous studies to address the research objectives (Lai, 2010; Ekinci & Ekinci, 2020). It consists of two sections of close-ended questions, as presented in the Appendix. The first section collects demographic information about the gender, age, academic major, and GPA of the subjects, while the second section consists of 13 items that measure the graduate students' perceptions of the use of automated writing feedback for improving their academic writing. Respondents were asked to rate their level of agreement on a five-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree).
The questionnaire aimed to provide a general perspective of the graduate students toward using automated writing feedback. To ensure that the items of the questionnaire had direct, clear, and understandable meanings, a pilot study was conducted with one graduate student. Accordingly, changes were applied to the introduction of the questionnaire: a rephrased definition of automated writing feedback tools, with examples, was provided to avoid ambiguity.
The questionnaire was designed in Google Forms and then distributed online to the respondents. One week after the questionnaire was sent, 46 responses matching the target sample had been collected.

Data Analysis
The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 28. Responses to questionnaire items were coded on an ordinal scale, with higher numbers corresponding to agreement and lower numbers to disagreement with the statement. Descriptive statistics were computed to find the mean and standard deviation of each item. The items of the questionnaire were then analyzed in terms of percentages to show the level of agreement.
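For illustration, the descriptive procedure described above can be reproduced outside SPSS. The sketch below uses only Python's standard library, and the item scores are invented for demonstration; the actual analysis was run in SPSS version 28 on the 46 collected responses.

```python
from statistics import mean, stdev

# Hypothetical Likert-coded responses for one questionnaire item
# (1 = strongly disagree ... 5 = strongly agree); not the study's data.
item_responses = [5, 4, 4, 3, 5, 4, 2, 4, 5, 3]

item_mean = mean(item_responses)   # average level of agreement
item_sd = stdev(item_responses)    # sample standard deviation

# Percentage of respondents choosing each scale point, as in Table 3.
percentages = {
    point: 100 * item_responses.count(point) / len(item_responses)
    for point in range(1, 6)
}

print(f"M = {item_mean:.2f}, SD = {item_sd:.3f}")  # M = 3.90, SD = 0.994
```

The same mean, standard deviation, and percentage-agreement figures can then be tabulated per item, which is the form in which Tables 2 and 3 report the results.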

Findings
Saudi graduate students responded to the questionnaire items about their perceptions toward the use of automated writing feedback tools for improving their academic writing. Their responses were then analyzed, and the descriptive findings are reported below.
The first section of the questionnaire, the background section, included questions about the respondents' gender, age, academic major, and GPA. Figure 1 indicates a higher proportion of males than females, accounting for almost 60%. Moreover, more than half the subjects were in their twenties, whereas the smallest group was people in their forties, as Figure 2 presents; people in their thirties represented 30% of the sample. In addition, the academic majors of two-thirds of the graduate students who participated in the survey were applied linguistics and engineering; computer science, finance, and public and business administration were the other academic majors. Furthermore, Figure 3 provides the distribution of GPA among the graduate students, with almost equal representation across all three GPA categories. The findings are presented as descriptive statistics: the mean scores and standard deviations of the 13 questionnaire items are shown in Table 2.
Among the items, the highest agreement was observed for the first and third statements, which concerned the use of automated writing feedback tools and whether they helped the students focus on their errors. These items had an equal mean (M = 4.19), with standard deviations of SD = .806 and SD = .957, respectively. The next highest statements (M = 4.04, SD = .868; M = 4.02, SD = .906) were that automated writing feedback tools decrease writing anxiety and increase confidence in writing. However, the least agreement among the statements was that receiving feedback with automated tools was preferred over teacher or peer feedback (M = 3.32, SD = 1.28). Overall, the means of the statements range between 3.32 and 4.19. The items of the questionnaire were also analyzed in terms of percentages to show the level of agreement on the five-point Likert scale, as in Table 3.

Discussion
In Table 3, the response options "strongly agree" and "agree" are interpreted as positive values, while "strongly disagree" and "disagree" are interpreted as negative. Positive or negative values above 50% are subject to discussion. By this criterion, all items in the questionnaire are interpreted as positive.
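The interpretation rule above can be sketched as a small function. The percentages used here are hypothetical, not figures from Table 3:

```python
def interpret_item(pcts):
    """Classify one questionnaire item by the >50% rule.

    pcts maps each Likert option to its response percentage.
    "Strongly agree" + "agree" form the positive value;
    "strongly disagree" + "disagree" form the negative value.
    """
    positive = pcts.get("strongly agree", 0) + pcts.get("agree", 0)
    negative = pcts.get("strongly disagree", 0) + pcts.get("disagree", 0)
    if positive > 50:
        return "positive"
    if negative > 50:
        return "negative"
    return "no majority"

# A made-up distribution resembling a strongly endorsed item.
example = {"strongly agree": 41, "agree": 42, "neutral": 9,
           "disagree": 5, "strongly disagree": 3}
print(interpret_item(example))  # prints "positive" (83% agreement)
```

Applying this rule to every row of Table 3 yields "positive" for all 13 items, which is the basis of the claim above.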
As Table 3 shows, the strongest agreement was with the third statement: 83% of the students believed that the use of automated writing feedback tools helped them focus more on their errors. In addition, 81% of the graduate students revised their writing when they used AWF, and 74% reported increased confidence in their writing. This is consistent with the case study conducted by Zhang (2017), in which systems of immediate writing feedback stimulated students to make more revisions and focus on their errors. The more they revised their writing, the higher the scores they received; these improved scores, in turn, increased motivation and confidence in their writing. One participant in Zhang's study stated, "I found that multiple revisions could improve my scores, I was greatly motivated to revise my draft" (p. 324).
Furthermore, it can be seen from the data in Table 3 that 76% of the graduate students in the current study agreed that receiving AWF helped them write better essays, and 79% agreed that AWF improved their writing, especially in language use and style. Language use and style refer to the ability to apply different styles and rules of the English language, such as capitalization, punctuation, and spelling. These findings are similar to those of Schraudner (2014), who analyzed his students' writing assignments using three different automated writing feedback tools (Grammarly, Paperrater, and Writewords). Most of the students' errors were in punctuation, spelling, and sentence structure; the use of AWF tools thus helped them improve their writing.
Moreover, the study conducted by Zhang and Hyland (2018) showed that many students did not appreciate their teachers' feedback. The authors suggested that this response might have been caused by weak institutions that did not provide good teaching and learning environments. This finding may support the agreement with the eighth statement, which showed a preference for AWF over teacher and peer feedback. In fact, 54% of the graduate students in the current study preferred the feedback of automated writing tools, whereas only 29% preferred traditional forms of feedback. In addition, nearly two-thirds of the students thought that AWF improved their writing in terms of content, development, and organization, although Zhang and Hyland (2018) argue that teachers' feedback offers better comments in these areas than a pattern-matching machine.
Table 3 also shows that 80% of the students use AWF, and 74% of them highly value the comments of automated writing feedback tools. These findings are similar to those of Grami (2020), which revealed that a large number of students had positive perceptions of the use of technology in second language writing, particularly automated writing feedback. Moreover, students appreciated the larger number of comments they received from electronic tools, compared to the few comments from their teachers.
In addition, 70% of Saudi graduate students believed that the use of AWF improved their writing in focus and meaning. Indeed, focus and meaning in writing refer to the ability of the writer to present a unified essay with a single idea or thesis. These results differ from the study of McCarthy, Roscoe, Likens and McNamara (2019) which indicated that the use of electronic spelling and grammar checkers had no effect on the unity of the essays.
To answer the research question, Table 3 shows that Saudi graduate students had positive perceptions of the use of different automated writing feedback tools to improve their academic writing. In addition, these tools significantly influenced their editing, revising, and proofreading.

Conclusion
Generally, the development of technologies greatly influences second language writing (Hyland, 2003). Given the increasing use of automated writing feedback at the present time (Grami, 2020), it is important to understand how Saudi graduate students perceive the effect of these tools on their writing, especially academic writing. The purpose of the current study was to determine the perceptions of Saudi graduate students toward the use of tools that offer immediate feedback. The main finding was that graduate students have positive perceptions of the use of AWF, and that it had a valuable impact in terms of editing, revising, and improving their writing.
Further studies may use the findings of this study to determine the extent of the suitability of automated writing feedback in one writing aspect, such as organization, content, coherence, unity, or style. Such a focus may include investigating one automated writing feedback tool, such as Grammarly.
Some limitations should be considered. First, the current study is limited by the low number of responses. In addition, the present study examined only the perceptions of graduate students, regardless of the actual impact on their grades, which could be evaluated through pre- and post-writing samples.