Usability of an eLearning Professional Development Program for Elementary Classroom Teachers: ASSIST for Disruptive Classroom Behaviours

An eLearning professional development (PD) program, ASSIST for Disruptive Classroom Behaviour, was developed using an iterative user-centred design approach. The program was designed to support teachers in implementing in-class interventions for disruptive classroom behaviour (DCB). The objective of the current study was to determine the usability of this program. Overall, the results suggest that end-users (i.e., classroom teachers) and stakeholders found the program highly usable.


Introduction
Disruptive classroom behaviour (DCB) is defined as off-task behaviour related to schoolwork, non-compliance with teacher requests, and aggression (Schaeffer et al., 2006; Yoder & Williford, 2019). Previous studies suggest that it is difficult to estimate the prevalence of DCBs accurately (Irvin et al., 2004; Skiba et al., 2008). However, based on reviews of studies conducted in the United States, approximately 40%-55% of students in Grades 1-12 have recorded incidents of DCB (i.e., records of the behaviour from classroom teachers; Kaufman et al., 2010; Skiba et al., 1997; Wright & Dusek, 1998). Disruptive behaviour adversely affects both students and teachers: it reduces instructional time and is associated with increased work-related stress among teachers (Klassen & Chiu, 2010; Luiselli et al., 2002). In addition, students who display DCB are at risk for impaired social relationships and poor academic and post-school outcomes (McDaniel et al., 2017). As such, teachers need effective ways to manage and mitigate the adverse effects of disruptive behaviour in their classrooms (Martino et al., 2016; Nelson et al., 2002).
Previous meta-analyses (Wilson et al., 2003; Wilson & Lipsey, 2007) suggest that evidence-based interventions (EBIs) for addressing DCB include behavioural approaches, such as teaching desired behaviours and providing rewards and feedback. Three key barriers prevent teachers from accessing and implementing these EBIs. First, it is uncommon for behavioural interventions to be taught during teacher training (Author et al., 2015; McCrimmon, 2015). Second, limited professional development (PD) focuses on teacher-implemented in-class interventions to address DCB (Author et al., 2015; Thomas & Deeley, 2004). Finally, when PD programs are available, they tend to provide limited individualized content and support, making it difficult for teachers to generalize from these programs and implement the strategies in their classrooms (Borko, 2004; Dede et al., 2009). There is a clear need for an accessible PD program that provides teachers with the knowledge and skills to manage in-class disruptive behaviour.
One way to address the previously described barriers to the use of EBIs for DCB is by delivering PD to teachers through an eLearning program. eLearning involves using information technology to produce learning materials and teach learners (Arkorful & Abaidoo, 2014). This approach to delivering PD can provide information and guidance on the implementation of in-class interventions in a way that is more available, accessible, cost-effective, scalable, and customizable than typical in-person PD programs (Arkorful & Abaidoo, 2014; Borrelli & Ritterband, 2015). In response to the impracticality of traditional PD programs, several effective PD programs have been developed using eLearning methods (Dede et al., 2009).
ASSIST (Accessible Strategies Supporting Inclusion for Students by Teachers; previously called Teacher Help) is an eLearning program designed to support classroom teachers in providing EBIs to students in Grades 1-12 with neurodevelopmental disorders (NDDs), and it has been extensively evaluated (Author et al., 2012; Author et al., 2013; Author et al., 2014; Author et al., 2014; Author et al., 2015; Author et al., 2019; Author et al., 2020; Author et al., 2021). Feedback from the most recent program evaluation suggested that teachers would be interested in a program focused on behaviour management strategies for all students. As a result, a new program, ASSIST for Disruptive Classroom Behaviour, was developed for teachers in the elementary grades (i.e., Grades 1-6). An outline of the ASSIST for Disruptive Classroom Behaviour program can be found in Table 1.
The purpose of the current study was to assess the usability of the ASSIST for Disruptive Classroom Behaviour program for end-users (i.e., classroom teachers) and stakeholders (i.e., school administrators, specialized teachers, and school psychologists). Usability studies are conducted to identify problems with a practice or product, uncover opportunities for improvement, and learn about end-user behaviour and preferences (Moran, 2019). They allow practices or products to be refined to better meet the needs of the target user population. Typically, usability studies are structured around theoretical frameworks. The current study used the user-experience honeycomb developed by Morville and Sullenger (2010) as its theoretical framework; the honeycomb divides user experience into the following components: accessibility, credibility, desirability, findability, usability, usefulness, and value. The current study addressed five research questions.

Participants
The sample consisted of 19 participants. End-users were 11 elementary classroom teachers. Stakeholders included 3 specialized teachers, 2 school administrators, 2 school psychologists, and a behaviour specialist.

End-Users
The end-user group included Canadian classroom teachers who were currently teaching in the elementary grades (i.e., Grades 1-6) and had worked in their current role for at least three years. Of the 34 classroom teachers who expressed interest in the study (i.e., completed the Screening Questionnaire), three were not eligible because they were not currently teaching in an elementary classroom. Consents were signed by 31 classroom teachers, but 20 did not begin their program review. Of those who began their review (n = 11), six completed a full review and all questionnaires, two reviewed all six sessions but did not complete the overall program questionnaire, one reviewed the first two sessions and completed the corresponding questionnaires, and two reviewed only the first session and completed the corresponding questionnaire.
Most end-users (n = 8) were teaching in early elementary grades (i.e., Grades 1-3) and taught in public schools (n = 9). End-users had taught for an average of 14.82 years, and their average age was 42.09 years. Most end-users were female (n = 10), white/Caucasian (n = 10), had a master's degree (n = 9), were located in Nova Scotia (n = 7), and lived in a city with fewer than 500,000 people (n = 6). In terms of training in behaviour management, some (n = 3) had not received pre-service training in behaviour management, but most (n = 7) had received PD on behaviour management. The number of students with DCB that end-users reported having taught ranged from 3 to more than 20. The most frequently reported sources of information about behaviour management for DCB were internet searches (n = 9), books (n = 7), and school board-mandated PD (n = 6). Full demographic information for end-users is in Table 2.

Stakeholders
The stakeholder group included specialized teachers (e.g., learning centre teachers, resource teachers, school counsellors), school administrators, and school psychologists who had worked with elementary students, including some with DCB, for at least three years. Of the 21 stakeholders who expressed interest in participating in the current study, two were excluded because they had not worked with elementary school children with DCB for at least three years. Of those who consented (n = 11), three did not begin their program review. Of those who did begin their review (n = 8), all but one reviewed the entire program; the remaining participant reviewed only the first session.
The average number of years in their current role was 5.38, with half of the stakeholders having worked as classroom teachers for an average of 8.75 years in that role. Most stakeholders worked in public schools (n = 7), were female (n = 7), were white/Caucasian (n = 8), and had a master's degree (n = 8); half lived in Nova Scotia (n = 4). The types of communities stakeholders lived in were distributed across rural communities (n = 2), towns (n = 2), and cities under 500,000 people (n = 3). Most (n = 7) stakeholders had received pre-service training in behaviour management, and all had received PD on behaviour management. The most frequently reported sources of information about DCB were books (n = 5), school board-mandated PD (n = 5), internet searches (n = 4), and professional or community organizations (n = 4). Full demographics for stakeholders are reported in Table 3.

Measures
All questionnaires were delivered via Research Electronic Data Capture (REDCap©; Harris et al., 2019), a secure online platform for research databases.

Eligibility Questionnaire
The Eligibility Questionnaire consisted of seven items designed to (1) assess whether participants met the inclusion criteria and (2) determine the group to which they belonged (i.e., end-users or stakeholders). The questions asked whether potential participants worked in the Canadian school system, whether they worked in the public or private school system, and what position they held (classroom teachers were asked whether they taught elementary students, and stakeholders were asked whether they had experience working with elementary students with DCB and how many years of experience they had working with elementary school students).

Participant Characteristics Questionnaire
The Participant Characteristics Questionnaire was an 8-item questionnaire used to characterize the sample. Questions asked for participants' age, sex, ethnicity or cultural heritage, highest level of education, approximate size of the community in which they worked, and the province in which they worked. Additionally, participants were asked whether they worked in a public or private school. If the participant was an end-user (i.e., a classroom teacher) or identified that they had previously worked as a classroom teacher, they were also asked to report the number of years they had spent teaching.

Previous Learning Questionnaire
The Previous Learning Questionnaire was a four-item questionnaire about participants' training in the management of DCB. Participants were asked to report on their pre-service and PD training in behaviour management, to estimate how many students they had taught who displayed DCB (classroom teachers only), and to describe how they would typically find new information about DCB.
Session Feedback Questionnaire (SFQ) and Program Feedback Questionnaire (PFQ)

The SFQ and PFQ each consisted of 20 items across two sections; the SFQ asked about individual sessions, while the PFQ asked about the overall program. The first section asked participants to rate each session, or the program overall, on a 5-point Likert scale (1 = Not at all, 5 = Extremely) across seven usability characteristics (i.e., usefulness, usability, desirability, findability, accessibility, credibility, and value) based on Morville's user-experience honeycomb (Morville & Sullenger, 2010). The first section also asked participants to explain why they selected their ratings. The second section of the SFQ and PFQ asked participants to rate, on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree), how strongly they agreed with statements about readiness for use, whether the worksheets/activities made the program flexible enough to tailor the intervention to a specific student, and their overall satisfaction with the session. For each of these statements, participants who responded negatively (i.e., not ready, not able to tailor to students, not satisfied) were asked to provide a written justification for their rating. Participants were also asked whether they thought information should be added, removed, or reordered and, if so, to provide details.
Recruitment

All recruitment methods referred potential participants to a dedicated page on the ASSIST website (www.assistforteachers.ca/usability). This web page provided information about the study and what participation involved, and included a hyperlink to REDCap. When participants followed the hyperlink, they were presented with the Eligibility Questionnaire, which determined their eligibility to participate. Finally, eligible participants were presented with the consent form.

Study Participation
After completing the online Information and Consent Form, participants were presented with the Participant Characteristics Questionnaire and the Previous Learning Questionnaire. Once they completed these questionnaires, participants were given access to the ASSIST program. They were also given access to the SFQs, to be completed after reviewing each program session, and the PFQ, to be completed after reviewing the entire program. At the end of each session, participants were reminded to complete the associated SFQ and access the next session. Once participants had reviewed all six sessions, completed all SFQs, and completed the PFQ, their participation was complete and compensation was distributed (i.e., a $40 Amazon.ca gift card per participant once all questionnaires were completed).

Data Analysis Plan
Descriptive statistics (i.e., means, standard deviations, percentages) were calculated for quantitative responses. For the open-ended SFQ and PFQ questions, responses were coded using the method from previous similar usability studies (e.g., Author et al., 2019) and Morville's seven dimensions of user experience (Morville & Sullenger, 2010). Two coders (M.O., A.I.) coded all data independently, with 95% inter-rater agreement calculated as a straight percentage across all data. Discrepancies were reviewed and resolved by a supervising researcher (P.C.). The coded data were then divided into two categories: positive feedback, which expressed agreement with or support for aspects of the session or program, and constructive feedback, which indicated potential barriers to the usability of the program or suggestions for improvement. Finally, the frequencies of positive and constructive feedback were tallied. Constructive feedback mentioned in responses to the SFQs but not in the PFQ was categorized as session-specific constructive feedback; constructive feedback from the PFQ was categorized as overall constructive feedback. To identify changes that should be made to the program, constructive feedback that was mentioned consistently (i.e., by at least three participants) was prioritized for future program development (Author et al., 2021). The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
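The coding tallies, the three-participant prioritization criterion, and the straight-percentage inter-rater agreement described above can be sketched as follows. This is a minimal illustration only: the records and theme labels are hypothetical examples, not study data, and they do not represent the authors' actual analysis scripts.

```python
# Illustrative coded responses: (participant_id, feedback_type, theme).
# These records are hypothetical, not study data.
coded = [
    (1, "constructive", "more downloadable materials"),
    (2, "constructive", "more downloadable materials"),
    (3, "constructive", "more downloadable materials"),
    (4, "positive", "usability"),
    (5, "constructive", "session length"),
]

# Tally constructive themes by the number of distinct participants mentioning each.
theme_participants = {}
for pid, ftype, theme in coded:
    if ftype == "constructive":
        theme_participants.setdefault(theme, set()).add(pid)

# Prioritize themes mentioned by at least three participants (the study's criterion).
PRIORITY_THRESHOLD = 3
priorities = [t for t, pids in theme_participants.items()
              if len(pids) >= PRIORITY_THRESHOLD]

# Inter-rater agreement as a straight percentage across all double-coded items.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)
```

With these hypothetical records, only the downloadable-materials theme crosses the three-participant threshold, mirroring the single prioritized change reported in the Results.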

Usability Ratings
Based on a scale of 1 to 5, mean usability ratings ranged from 3.67 to 4.50 for end-users and from 3.57 to 4.57 for stakeholders. Based on the summed usability ratings for each session and the program overall, with a maximum score of 35, usability ratings were consistently positive across the program and between end-users and stakeholders. The sum of mean ratings for end-users was very similar across sessions, ranging from Σ = 27.27 to Σ = 29.14. For stakeholders, the summed mean ratings were also very similar across sessions, ranging from Σ = 28.43 to Σ = 29.87. Based on the summed ratings, usability ratings were consistently positive for each component of Morville's usability honeycomb (Morville & Sullenger, 2010). For end-users, the sum of mean ratings was very similar across components, ranging from Σ = 27.08 to Σ = 29.52. Similarly, for stakeholders, the sum of mean ratings was very similar across components, ranging from Σ = 27.05 to Σ = 30.38. See Tables 4 and 5 for the mean usability ratings for end-users and stakeholders, respectively.
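The summed session scores above can be illustrated with a short sketch: each participant rates a session on the seven honeycomb components (1 to 5), the mean of each component is taken across participants, and the seven component means are summed to give a session score out of 35 (7 components × 5). The ratings below are hypothetical, not study data.

```python
# Hypothetical session ratings: participant -> seven component ratings (1-5),
# ordered as usefulness, usability, desirability, findability, accessibility,
# credibility, value. Values are illustrative, not study data.
session_ratings = {
    "p1": [4, 5, 4, 4, 5, 4, 4],
    "p2": [4, 4, 3, 5, 4, 5, 4],
    "p3": [5, 4, 4, 4, 4, 4, 5],
}

# Mean rating per component, taken across participants.
n = len(session_ratings)
component_means = [
    sum(ratings[i] for ratings in session_ratings.values()) / n
    for i in range(7)
]

# Summed mean rating for the session (maximum possible = 7 * 5 = 35).
session_sum = sum(component_means)
```

The same summation can be run the other way (per component, across sessions) to produce the component-level sums reported in the second half of the paragraph.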

Qualitative Feedback
Participants provided a total of 73 qualitative responses. Of those responses, 42 were categorized as positive feedback and 31 as constructive feedback. The most frequent positive feedback, comprising over half of the positive responses (n = 21, 51%), focused on the usability of the program. Usability refers to the ease of use of the program, including whether the information presented can be implemented (Morville & Sullenger, 2010). For example, one stakeholder noted, "It is great and cannot wait for other teachers to begin using [this program]". A classroom teacher stated, "I learned valuable and practical information". Another classroom teacher responded, "the information is excellent. Even if only pieces of this program were implemented, it can help teachers".
The second most frequent positive feedback focused on the findability of the program (n = 11, 27%). Findability refers to how easily users can navigate the program (Morville & Sullenger, 2010). For example, one classroom teacher noted that the program was "easy to understand". Another said, "the program was well laid out and presented clearly". One of the stakeholders noted that the program was "very user friendly", and another stated, "[the] information was well organized, realistic to implement and clear".
The third most frequent positive feedback focused on the acceptability of the program (n = 10, 24%). Acceptability refers to the extent to which participants felt that the program was appropriate for the target end-users (Sekhon et al., 2017). For example, one classroom teacher said, "I would not change anything about the program", and one stakeholder remarked, "overall, the course was well done".
The most frequent constructive feedback, comprising 48% of constructive responses (n = 15), focused on changes to the program's content, specifically the inclusion of more tangible or downloadable materials. For example, one participant stated, "printable posters would be good." The addition of tangible or downloadable materials met our criterion for prioritization in future program development (i.e., mentioned by at least three participants). No other theme reached this criterion.

Readiness, Flexibility, and Satisfaction Ratings
A summary of the mean ratings of session readiness, flexibility, and participant satisfaction can be found in Tables 6 and 7. Based on a scale of 1 to 5, mean readiness ratings ranged from 3.75 to 4.22 for end-users and from 3.43 to 4.00 for stakeholders. Mean flexibility ratings ranged from 3.88 to 4.22 for end-users and from 3.57 to 4.14 for stakeholders. Finally, mean satisfaction ratings ranged from 4.00 to 4.27 for end-users and from 3.86 to 4.14 for stakeholders. Overall, mean ratings consistently supported the program's readiness for use by teachers, its flexibility in being tailored to individual students, and participants' satisfaction across the sessions of the program and between end-users and stakeholders.

Discussion
The overall objective of the current study was to assess the usability of the newly developed ASSIST for Disruptive Classroom Behaviour eLearning program. Both end-users (i.e., teachers) and stakeholders (i.e., administrators, specialized teachers, school psychologists, and behaviour specialists) reported on their experiences with DCB. Some classroom teachers reported not receiving pre-service training in behaviour management, although most had received PD on behaviour management. However, the most frequently reported sources of information about managing disruptive behaviour were internet searches and books. Most stakeholders reported that they had received pre-service training, and all had received PD on behaviour management. As with classroom teachers, one of the most frequently reported sources of information was books. However, unlike classroom teachers, an equally frequent source of information was PD provided by school boards. These results suggest a discrepancy in the training available to classroom teachers and stakeholders, further supporting the need for ASSIST for Disruptive Classroom Behaviour.
The current study results suggest that both groups of participants found the program to be highly usable. Across sessions, participants provided high overall usability ratings. They also provided consistently high ratings across the components of Morville and Sullenger's (2010) user-experience honeycomb. The usability ratings were supported by the positive feedback received from participants, which spoke primarily to the program's usability, findability, and acceptability. These results are similar to those seen in the most recent usability assessment of another ASSIST program (Author et al., 2020). However, one piece of constructive feedback, the need for additional tangible or downloadable materials, was received consistently and will inform future refinement of the program.
The current study results also suggest that both participant groups perceived the program as ready for use and flexible. As with the usability ratings, the ratings of readiness and flexibility (i.e., the ability to tailor the program to individual students) were consistently high across sessions. Similarly, participants provided consistently high satisfaction ratings. Again, these results are consistent with those found in the most recent usability study of another ASSIST program (Author et al., 2020).
The consistently high ratings seen in the current study are likely due, at least in part, to the user-centred approach taken to develop the new ASSIST program. To meet the needs of end-users, participant responses directly informed the development of ASSIST for Disruptive Classroom Behaviour. Overall, the results of this user-centred approach reflect those of previous studies of eLearning programs developed with a user-centred approach, which similarly reported increased usability and acceptability (Klock et al., 2019; Parker et al., 2020).

Strengths and Limitations
The current study results must be interpreted in the context of several strengths and limitations. First, while the study sample is small (six end-users and seven stakeholders completed the full program review), this is within the recommended sample size of 12 to 20 participants for usability testing (Barnum, 2011) and meets the "magic number" of five participants per group (Lewis, 1994; Nielsen, 2000; Virzi, 1992). A further strength is that the sample of end-users was varied (Grades 1-6), and the stakeholder sample was quite heterogeneous, containing perspectives from several positions within the school system.
The current study's limitations relate to the generalizability of this sample. Teachers were mostly teaching early elementary grades and were from Nova Scotia, limiting the generalizability of the results to the experiences of classroom teachers across grades and provinces. However, given how standardized teacher education is in Canada, it is unlikely that teacher perspectives would vary significantly across provinces (Perlaza & Tardiff, 2016). There was also a high drop-out rate, suggesting that the results reflect only the experiences of participants who were motivated or able to participate in this research. The drop-out rate may be partially explained by the COVID-19 pandemic, which disrupted the regular operations of the Canadian school system (Bresge & Hobson, 2021). However, the ratings and qualitative feedback of participants who completed only part of their review were consistent with those of other participants. The current study also focused on teachers' perceptions of the program rather than its real-world implementation. As previous research into consumer behaviour has suggested, the use of self-report data alone can lead to inaccurate conclusions about consumer preferences (Bell et al., 2018; Venkatraman et al., 2015).
An additional limitation of the current study relates to the rationale for developing the new ASSIST program. As stated in the introduction, estimates of the prevalence of DCB range from 40% to 55%, based on incidents of DCB recorded on student records for students in Grades 1-12 in the United States (Kaufman et al., 2010; Skiba et al., 1997; Wright & Dusek, 1998). However, those estimates include students who have had only a single incident of DCB, potentially inflating the prevalence rates. The literature provides estimates of the prevalence of disruptive behaviour disorders (DBD; American Psychiatric Association, 2013), but there is currently no clear estimate of the prevalence of chronic DCB that does not meet the criteria for a DBD. It is unlikely that classroom teachers would use ASSIST to modify the behaviour of students with few incidents of DCB rather than students who chronically display DCB. As such, the rationale for developing the new ASSIST program is based on the best available prevalence estimates for DCB, which may limit its application to classroom settings.

Future Directions
Taken together, the results of the current study indicate a clear trajectory for the future research and development of the ASSIST for Disruptive Classroom Behaviour program. First, the program will need to address the need for additional tangible and downloadable content, as indicated by participants. Specifically, participants noted that they would like to download hard copies of the program content, so printable summaries of program content should be added to each session. Once this modification is complete, the program's effectiveness should be assessed. Specifically, effectiveness testing should involve giving end-users access to the program and asking them to implement it, with ratings of their students' behaviour collected from an independent observer to determine whether using the program decreases DCB.

Conclusions
Based on the results of the current study, ASSIST for Disruptive Classroom Behaviour is a promising method for delivering EBIs for DCB to classroom teachers. Overall, end-users and stakeholders found the program highly usable, suggesting that the program may have high uptake once it is refined, tested for effectiveness, and made publicly available.
a) Which usability components meet and do not meet the needs of end-users and stakeholders for each of the six sessions and the overall program?
b) Should any changes be prioritized in the future refinement of the program?
c) Do end-users and stakeholders perceive the program to be flexible enough to tailor implementation to the needs of specific students?
d) Were end-users and stakeholders satisfied with the program?
e) Do end-users and stakeholders perceive the program to be ready for use by classroom teachers?