An Investigation of Simultaneous Prompting to Teach an Addition Algorithm to Preschool Students



Introduction
In the United States, federal legislation has clearly established that a primary responsibility of every public school is accounting for the academic achievement of its students. Specifically, the No Child Left Behind Act (NCLB, 2002) served as the impetus for annual statewide tests of students' academic achievement, which continue in accordance with NCLB's successor, the Every Student Succeeds Act (ESSA, 2015). As one way of ensuring a school's focus on students' academic achievement, this legislation requires schools that do not meet performance goals to initiate and execute an improvement plan.
Among a school's students are those with disabilities who receive special education services through the provisions of another federal law, the Individuals with Disabilities Education Act (IDEA, 2004). This multifaceted legislation, which has the most direct impact of any federal law on school-based programming unique to these students, tasks schools to identify and subsequently provide students with disabilities services that result in appropriate progress in light of their circumstances (Yell, 2017). Moreover, when these students are identified, public schools are directed to educate them, to the maximum extent appropriate, in general education settings (IDEA, 2004). Thus, general education classroom teachers must be knowledgeable about instructional strategies that have proven to be effective across the diverse students they teach.

Tiered Intervention Frameworks: Protocols for Presenting Differentiated Instruction
One mechanism that has evolved to assist schools in educating the diverse groups of students who populate general education classrooms is tiered intervention frameworks. These frameworks are labor-intensive, highly-managed protocols whose instructional services are described in terms of tiers (Morse & Russo, in press). Tier 1 consists of the instruction presented in the general education classroom. Tier 2 is remedial instruction, for underachieving students, which supplements the Tier 1 instruction they continue to receive. Finally, Tier 3 involves intensive instruction that often addresses the needs of students receiving special education (Danielson et al., 2019). Altogether, tiered intervention frameworks have been established for several purposes, which include (a) accounting for the academic achievement of every student, (b) matching students to appropriate instruction, and (c) identifying students with disabilities, particularly a specific learning disability (The IRIS Center, 2006).
The IDEA (2004) alluded to the use of one framework, response to intervention (RTI), as a mechanism for (a) presenting early intervening services to underachieving students and (b) identifying students who need to be provided special education services. In fact, the law allows for 15% of the funds associated with implementing it to be used for early intervening services (Yell, 2016). However, since 2004, a paradigm shift has occurred, resulting in RTI being subsumed under the umbrella term multi-tier system of supports (MTSS).
MTSS frameworks, which are supported by the ESSA (Peterson et al., 2019), seek to combine the focus of RTI (i.e., students' academic achievement) with that of a conceptually similar framework, positive behavioral interventions and supports (PBIS), which addresses students' social behaviors. The reasoning for combining the two is the recognition that a student's academic achievement and engagement in proper social behaviors at school are interrelated: a student who engages in proper social behaviors is more likely to benefit from the academic instruction provided. While references to MTSS, RTI, and PBIS can still be found in the professional literature, given its current pre-eminence, MTSS is used hereafter in this manuscript.
MTSS frameworks are being used with varying degrees of success at all grade levels, preschool through high school (Fuchs et al., 2010; Shepley & Grisham-Brown, 2019). The foundation for an MTSS is its Tier 1 instruction, which is characterized as high-quality instruction in general education classrooms. Broadly defined, this instruction consists of using evidence-based practices to teach an appropriate curriculum (The IRIS Center, 2018). Evidence-based practices are programs or individual instructional strategies that have, across multiple rigorous research studies, proven to be effective for their stated purposes (Cook et al., 2009).

MTSS Mathematics Focus
Unsurprisingly, two core curricula that have received much attention with respect to MTSS frameworks are reading and mathematics (Gersten et al., 2008, 2009). Regarding the teaching of mathematics, its importance to an individual's school and post-school success has been documented (Jiminez & Saunders, 2019; The IRIS Center, 2017). Moreover, the Common Core State Standards have been published as a comprehensive set of mathematics standards for students in grades pre-kindergarten through 12 (Ramirez et al., 2014).
With respect to an MTSS framework and Tier 1 mathematics instruction, teachers need evidence-based practices that have proven to be effective across the range of students in a general education classroom, meaning those with disabilities who receive special education services in addition to the general education students who are not receiving these services. Furthermore, Tier 1 instruction must be evaluated routinely to ensure that it proves effective with as many students as possible while, at the same time, ensuring that remedial, Tier 2 instruction is provided only to the limited number of students who demonstrate a valid need for it because, in many instances, this remedial instruction is in short supply (Lambert, 2022).

Identifying Evidence-Based Practices for Teaching Mathematics to All Students in an MTSS
To date, numerous efforts have resulted in the identification of evidence-based practices for students with disabilities (Ledford et al., 2012; Steinbrenner et al., 2020; Tekin-Iftar et al., 2019). Yet, the professionals involved in these efforts have not extended this work in ways that would align with an MTSS framework. That is to say, research has not addressed the effectiveness of these strategies with respect to students receiving Tier 1 and Tier 2 services. Investigating the use of the aforementioned instructional strategies within these frameworks may result in the identification of strategies that general education teachers can use across all students. Doing so has a number of potential advantages, including (a) aligning the teacher's instruction with the definition for high-quality instruction, (b) meeting the IDEA's stipulation for teachers to use scientifically-based instruction with students with disabilities, and (c) increasing the probability that any one student will be provided effective instruction.
Simultaneous prompting is an instructional strategy that has been established as an evidence-based practice for students with disabilities (Tekin-Iftar et al., 2019). It is a response prompting instructional strategy that consists of the teacher providing a controlling prompt that ensures correct student responding. Specifically, during an instructional session (which is typically referred to as a training session), the teacher identifies the skill the student is to learn and then consistently provides support that will enable the student to perform the skill correctly. The following day, the student is tested to see if they can perform the skill independently and correctly. This teaching-testing cycle continues until the student attains the criterion for mastery (Gibson & Schuster, 1992).
The procedure evolved from other response prompting strategies that consisted of a component that would be considered an example of gradual release. However, when researchers noted that some students progressed directly to independent, correct responding after only receiving maximum teacher support rather than gradual release, the researchers tested the validity of what is now known as simultaneous prompting (Wolery et al., 1992). Particularly noteworthy is how simultaneous prompting's teaching-testing cycle mirrors the use of effective practice procedures for long-term skill maintenance (Morano, 2019).
In their meta-analysis, Tekin-Iftar et al. (2019) remarked that simultaneous prompting has been effective in teaching a broad range of academic skills to a diverse group of students. Hence, Tekin-Iftar et al. identified areas worthy of further investigation for the purposes of more precisely determining (a) the students for whom simultaneous prompting would prove to be effective and (b) the academic content that can be taught with the strategy (Collins, 2012). Tekin-Iftar et al. specifically identified mathematics as a content area for further investigation.
Subsequent to Tekin-Iftar et al.'s report, Morse (2023) reviewed the research pertaining to the use of simultaneous prompting to teach mathematics and concluded that the existing evidence supports characterizing it as a promising practice for teaching mathematics, meaning it is an instructional strategy that has credible research support but insufficient support for characterization as an evidence-based practice (The IRIS Center, 2009). Morse identified 15 studies involving students with disabilities and one study involving general education students demonstrating an academic achievement deficit and, therefore, in need of remedial instruction. The mathematics skills addressed in the studies reviewed included declarative knowledge (e.g., naming numerals and mathematics symbols) as well as procedural knowledge (e.g., solving a linear equation).

Research Problem
In his review, Morse (2023) identified two areas for extending the current research: (a) involve preschool students as participants and (b) explore simultaneous prompting's use within an MTSS framework. Consequently, the present study was conducted to determine whether simultaneous prompting would prove to be effective in teaching a five-step addition algorithm to preschool students receiving Tier 1 services in their current MTSS program. The following research questions were posed.
1) Will simultaneous prompting prove to be effective in teaching a five-step addition algorithm to solve three addition basic facts to preschool students receiving Tier 1 services in their current MTSS program?
2) Will the students acquire incidental information (i.e., mathematics terminology) presented within each instructional trial?
3) Will the students maintain, for up to 6 weeks after achieving the criterion for mastery, their ability to perform the five-step algorithm to solve three addition basic facts?
4) Will the students generalize their ability to perform the five-step algorithm to solve (a) the three target addition basic facts when the order of the addends is changed (i.e., demonstrate an understanding of the commutative property of addition), (b) three novel addition basic facts presented by the instructor, and (c) three novel addition facts created by the student?

Setting
Screening sessions were conducted in a 1:1 arrangement across two classrooms. In one classroom, the student and instructor sat at a circular table, while in the other classroom they sat at a rectangular table. In both instances, they sat side-by-side with their backs facing the routine, ongoing activities in the classroom in an attempt to mitigate possible distractions.
All other sessions were conducted at an oval table in the school's conference room. A 1:1 instructional arrangement was used with the instructor and participant sitting side-by-side. Additionally, in accordance with the school's policy, a staff member was present for the purpose of monitoring the participant's safety. Because reliability checks were conducted via Zoom, the three aforementioned individuals were the only ones physically present during baseline, training, maintenance, and generalization sessions.

Participants
Four students, ages 4-5, enrolled in a university-based preschool program participated in the study. They were selected from among a group of eight students who were nominated by their teachers as being candidates for the study because they had progressed in the mathematics curriculum such that they (a) were ready to be taught an addition algorithm and (b) had demonstrated all of the prerequisite mathematics skills delineated below. Table 1 lists relevant demographic information for the four participants, whose real first names have been replaced with pseudonyms. Prior to the commencement of the baseline phase, the participants demonstrated the ability to perform each of the following prerequisite mathematics skills: rote count from 0-20, name the numerals 0-20, count 0-20 objects, select the numeral that stood for a quantity of unit blocks (0-9), and demonstrate equality with quantities from 0-9 (Stein et al., 2017). Additionally, each student demonstrated (a) visual and auditory acuity within normal limits, (b) the ability to mimic a verbal/model prompt, as well as (c) the ability to participate in a tabletop lesson for up to 15 minutes.
The first author served as the instructor throughout each phase of the study. He possessed extensive experience presenting instruction using simultaneous prompting and other response prompting procedures. Moreover, he has authored reports of studies involving response prompting strategies as well as reviews of the peer-reviewed literature pertaining to the strategies. The second author, a mathematics teacher educator, conducted all of the procedural and dependent variable reliability checks. Prior to the study, she had no experience with any response prompting strategies with this student population.

Materials
The materials used to teach the five-step algorithm included foam base-ten unit blocks, two magnetic dry erase boards (5"×6"), and two sets of magnetic numerals (0-9) and symbols (+, =), which were identical in every respect except that one set was blue and the other red. Some of the mathematics terminology involved shape words, and each word was handwritten, in lowercase letters and black ink, on a 3"×5" solid white index card.
The instructor used a stopwatch app on his cell phone to record the length of each session, and a pencil and paper to record each participant's performance data.

Five-Step Addition Algorithm
Two learning outcomes were measured. One was correctly determining the sum for three basic addition facts (4+2, 0+5, 1+7), while the other was each participant's use of a five-step algorithm, involving a concrete depiction of a part-part-total strategy, to solve each fact. That is to say, a participant could be credited with determining a correct sum while, at the same time, not performing the five-step algorithm properly. However, in order to achieve the criterion for mastery, each participant had to determine the correct sum for each addition fact while also demonstrating the correct use of the five-step algorithm. Thus, the overall criterion for mastery was correctly determining the sum for each fact and simultaneously performing the five-step algorithm across three daily probe sessions.
Several criteria were used to select the addition facts: their sum had to be between 0 and 9, they consisted of different addends between 0 and 9, zero was included as an addend in one fact, and the facts differed with respect to whether the greater or lesser addend was listed first.
The five-step algorithm is presented next.
1) Put the correct number of unit blocks under the first or second addend.
2) Put the correct number of unit blocks under the other addend, meaning the one for which this task was not completed during Step 1.
3) Make one set of blocks on the right-hand side of the = symbol that consists of the amount of blocks represented by both addends.
4) Count the set of blocks created during Step 3 and then choose, from the numeral board, the numeral representing that many blocks and place it on the right-hand side of the = symbol for the sum.
5) Check the work by counting all of the blocks on both sides of the = symbol to ensure they were the same.
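For readers who find a procedural sketch helpful, the five steps can be expressed as a short Python function. This is purely illustrative, not part of the study's materials (the participants used physical base-ten blocks and magnetic numerals, not software), and the function name is hypothetical.

```python
def solve_with_algorithm(addend1, addend2):
    """Illustrative sketch of the five-step part-part-total algorithm.

    Blocks are modeled as list elements; in the study they were
    physical foam base-ten unit blocks.
    """
    # Steps 1 and 2: place the correct number of blocks under each addend.
    part1 = ["block"] * addend1
    part2 = ["block"] * addend2

    # Step 3: make one combined set on the right-hand side of the = symbol.
    total_set = part1 + part2

    # Step 4: count the combined set and select the matching numeral.
    sum_numeral = len(total_set)

    # Step 5: check the work by confirming the blocks on both sides
    # of the = symbol represent the same quantity.
    assert len(part1) + len(part2) == sum_numeral

    return sum_numeral

# The three target facts taught in the study
for a, b in [(4, 2), (0, 5), (1, 7)]:
    print(f"{a}+{b}={solve_with_algorithm(a, b)}")
```

Note how Step 5, the self-check, corresponds to the final comparison before the sum is reported; this is the step the participants most often omitted, as discussed later in the error analysis.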
Noteworthy is that, with only two exceptions, when 0 was an addend, each participant remarked that "zero stands for nothing", or words to that effect, which served as evidence for performing the step correctly. On the other two occasions, the instructor asked the student why they did not put any unit blocks under the 0, and the participants commented as just described.
The five-step algorithm was partially based on a teaching procedure described by Stein et al. (2017), which required students to draw tic marks and write numerals. Since the participants in this study did not demonstrate the ability to do either task efficiently, base-ten unit blocks were used in place of tic marks and the participants manipulated magnetic numerals instead of writing them.

Mathematics Terminology
Two types of mathematics terminology were presented as incidental information in each trial (Albarran & Sandbank, 2019; Werts et al., 2011). One of three shape words (circle, square, or triangle) was shown on an index card, spelled by the instructor with finger tracking, then read to the participant right before simultaneous prompting was used to teach the five-step algorithm for solving an addition fact. After the fifth step was completed, the instructor pointed to the numerals that functioned as the sum and addends in the equation, and named them as such.

Experimental Design and Procedures
A multiple probe across participants design was employed. In this design, experimental control is established on a case-by-case basis when stable or contra-therapeutic baseline data are followed by training data that indicate an increase in the participant's performance relative to the baseline data. Specifically, both the level and trend of the data are characterized as being therapeutic (Gast & Ledford, 2014). When this result is confirmed with one participant and then replicated across at least two others, a functional relationship has been established, meaning the independent variable, or intervention, is confirmed as being responsible for the change in the dependent variable (Ledford et al., 2023). In the present study, simultaneous prompting was the intervention that resulted in three participants learning to use a five-step algorithm to solve three addition basic facts.
However, this study involved four participants, one of whom voluntarily withdrew during her training phase. Thus, in the experimental design reported here, after all four participants completed four or five baseline sessions, training sessions were conducted with the first participant while additional probe sessions were conducted every one to two weeks with each participant who remained in baseline. When a participant achieved the mastery criterion, two to three consecutive baseline probes were conducted with the next participant scheduled to begin training sessions while the other participants remained in baseline, as just described. In each instance, a stable or contra-therapeutic data trend was established during baseline before training sessions began.

Baseline
During every baseline session, each addition basic fact was presented twice, and in random order across sessions. Each of the six trials involved a single opportunity probe, meaning that once an error was recorded for one of the five steps that comprised the algorithm, that step and all subsequent steps were recorded as errors. No feedback was provided following any response, but verbal praise for effort was presented every other trial.
The instructor constructed each addition fact directly in front of the student using the magnetic numerals and then read it aloud while concurrently pointing to each addend as well as the + and = symbols. He then pointed to a group of unit blocks and the numeral board that were positioned on the right-hand side of the addition problem and presented the task directive: "Use these to show me how to solve this problem." The trial was discontinued when the participant either made an error or told the instructor they did not know what to do. In the latter instance, all of the steps not performed were recorded as errors.
Assessment probes during the baseline and training phases measured the participants' acquisition of the incidental information: reading three shape words and naming the addends and sum. Each shape word was presented once at the beginning of each baseline session and the percentage of correct responses for each word was calculated across all baseline sessions. These percentages were then compared to those resulting from the three training sessions that resulted in the student meeting the criterion for mastery.
This same protocol was used for naming the addends and sum, which were assessed at the end of each baseline and training session. Specifically, in baseline a unique, completed addition fact was presented and the participant was asked, "In an addition problem, what are these called?" as the instructor pointed to each. During training, an identical trial was presented, but it consisted of the last addition fact completed during the session.

Intervention
At the start of each training session, the participant selected the color of the magnetic numerals and symbols they would use. Each participant was then given a set of base-ten unit blocks and a magnetic dry-erase board with the numerals 0-9 placed on it. Each session, the number of blocks varied, as did the position of the numerals on the magnetic board. This same arrangement applied to the second set of materials that was used by the instructor.
The instructor began each training session by securing the participant's attention. Next, the instructor initiated a trial by presenting a shape word card, pointing to each letter while spelling the word, then reading it.
Afterwards, using magnetic numerals as well as the + and = symbols, the instructor constructed the same addition fact in front of himself and in front of the student. He then presented the controlling prompt to demonstrate how to complete the first step of the five-step algorithm while using his unit blocks. The controlling prompt was a verbal-model prompt that involved the instructor explaining what to do while completing the step (e.g., "I create one group by putting a set of blocks that matches the value of the first numeral under it.").
After completing the step, the instructor said to the participant, "Now you do it." The instructor presented descriptive, behavior-specific praise following a participant's correct response (e.g., "Good, you put four blocks under the numeral four.") and, following each incorrect response, verbally explained to the student how to redo the step correctly (e.g., "No, you need to put one more block with your group of three blocks because the numeral 4 stands for four blocks."). Thus, training trials were total opportunity trials since the participant completed each step of the algorithm, either independently or after receiving error correction. After the participant completed the final step of the protocol, the instructor pointed to, and named, the addends and sum.
During the inter-trial interval, the instructor recorded the participant's performance data, reset the numeral boards, and put the unit blocks back in their respective piles. Afterwards, he constructed the next addition fact as described above and initiated the next trial.

Assessment: Daily Probes
Following baseline and the first training session, a daily sequence of a probe session followed by a training session was implemented.
At the outset of a probe session, the instructor secured the participant's attention and then presented each shape word by holding its index card in front of the student and presenting the task directive, "Read this word." The participant was given four seconds to respond. No feedback was provided following a correct or incorrect response, and a failure to respond within the response interval was counted as an incorrect response. After all three words were presented, the instructor presented verbal praise for effort.
Trials for solving the basic addition facts were conducted in the same manner as the baseline trials.Likewise, the baseline procedure was used to assess the participant's acquisition of the terminology for naming the addends and sum.

Maintenance
Maintenance sessions were conducted every two weeks after a participant achieved the criterion for mastery. The procedures followed were the same as those used to conduct baseline sessions.

Generalization
Generalization sessions were conducted within one week after a participant achieved the mastery criterion. The procedures followed were the same as those used during baseline sessions.
Three sets of addition basic facts were presented one time each: (a) the facts taught during the training trials but with their addends reversed (to assess the participant's ability to apply the five-step algorithm to the commutative property of addition); (b) three novel facts created by the investigators in accordance with the same guidelines used to create the targeted facts but with different pairs of addends (5+4, 1+3, 0+7); and (c) three novel facts created by the participant using the numerals 0-9 (i.e., Kay: 7+4, 3+1, 5+9; Jax: 5+6, 0+8, 1+4; and Betty: 1+8, 0+2, 5+9). In some instances, this resulted in facts whose sum was greater than 10. In these instances, the numeral board was reconfigured, out of the participant's view, so that the correct two-digit sum was available along with a couple of other two-digit numerals that served as distractors.
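The relationship between the first generalization set and the target facts can be made concrete with a brief Python sketch. This is an illustration only; the function name is hypothetical and no software was used in the study.

```python
def reversed_facts(facts):
    """Reverse the addends of each target fact to create the
    commutative-property probe set; each sum is unchanged."""
    return [(b, a) for a, b in facts]

# Target facts from training and their reversed-addend counterparts
targets = [(4, 2), (0, 5), (1, 7)]
for (a, b), (c, d) in zip(targets, reversed_facts(targets)):
    assert a + b == c + d  # commutativity: addend order does not change the sum
    print(f"{a}+{b} and {c}+{d} share the sum {a + b}")
```

A participant who solves both members of each pair with the five-step algorithm is, in effect, demonstrating the commutative property with concrete materials.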

Incidental Information
While all three participants who achieved mastery also learned to name the addends and sums (i.e., each performed at 0% correct on all baseline trials, whereas each scored 100% correct during the trials that resulted in their demonstration of the criterion for mastery), their performances with respect to reading shape words differed. Both Kay and Betty read 0% of these words correctly during baseline trials and 100% (for circle), 67% (for square), and 100% (for triangle) of them correctly during mastery trials. Jax read 0% of these words correctly during baseline trials and 33% of each of the words correctly during mastery trials. Table 5 summarizes these data.

Maintenance
Two of these same participants, Kay and Betty, demonstrated 100% accuracy on all of the maintenance trials conducted two to six weeks after they achieved the mastery criterion. Kay completed maintenance trials two, four, and six weeks after mastery, and Betty two weeks after. Jax demonstrated 90% accuracy performing the five-step algorithm and determined the correct sum on five of six trials during the two-week maintenance session, then performed with 100% accuracy on all trials during the four-week session. Jax and Betty could not complete additional maintenance trials due to the end of the school year.

Generalization
Kay and Jax performed with 100% accuracy across all generalization trials. Betty did the same with one exception: she did not properly execute the five-step algorithm to solve one of the facts she created: 0+2.

Social Validity
The participants' two teachers completed a 4-item survey designed to measure their satisfaction with the intervention, the study's focus on teaching a five-step addition algorithm, and the outcomes realized by the participants (Wolf, 1978). In response to each item, both teachers selected the highest rating, from among the five options presented, indicating they were very satisfied with the instructional strategy, the study's focus on solving addition basic facts, and all of the outcomes the participants realized.

Discussion
Simultaneous prompting proved to be effective in teaching three participants how to execute a five-step algorithm to solve three addition basic facts. Importantly, the participants' use of base-ten unit blocks, which served as a concrete depiction of the part-part-total approach to solving an addition basic fact, resulted in their demonstration of one type of conceptual understanding of addition. Altogether, the three demonstrations of effect provide solid evidence of the effectiveness of the simultaneous prompting procedure in teaching the five-step algorithm to students receiving Tier 1 services in their school's MTSS program, which was the focus of this study (Ledford et al., 2023).
Moreover, the three participants demonstrated high accuracy on measures of maintenance and generalization. Particularly noteworthy is that each participant demonstrated generalization across three different sets of addition basic facts. By doing so, each participant extended the demonstration of their conceptual understanding of one way to solve addition basic facts, which is markedly different from rote responding with the sum to an addition fact represented by abstract symbols. To date, the only other study that investigated teaching students to solve basic facts simply had students engage in rote responding with the correct answer (see Drevon and Reynolds's (2018) approach to employing simultaneous prompting to solve multiplication facts).
Analyses of trials and time to criterion revealed consistency across the three participants who achieved mastery. Error analysis in terms of the participants' percentages of training errors indicated that simultaneous prompting proved to be an instructional strategy that resulted in near errorless learning. Noteworthy is that these data were obtained even though the controlling prompt was presented multiple times during each trial. In contrast, in most mathematics studies involving simultaneous prompting, the percentage of participant errors during training trials has not been reported (Morse, 2023), in spite of the fact that this outcome is touted to be a primary reason for using the strategy with certain students, such as those with disabilities who may not benefit from making errors (Collins et al., 2018).
Error analysis with respect to procedural and dependent variable reliability indicated that the two most frequent procedural errors were failing to present (a) incidental information (4 times) and (b) descriptive verbal praise following a correct response (3 times). The steps within the algorithm that resulted in the highest percentage of participant errors were the last three: combining both groups of addends, counting the combined group and identifying the sum, and checking the work. These high error percentages continued, to varying degrees, across each of the participants as they demonstrated mastery of the different steps within the algorithm and were asked "Is there anything else you need to do?" when they did not execute the next step, which indicates the percentages were not due solely to the use of single opportunity probes.
Rather, the high percentage of errors on the steps in the middle and second half of the algorithm suggests that each participant's working memory may have been overtaxed (Riccomini & Morano, 2019). Perhaps once a participant transferred the algorithm's first few steps to long-term memory, they were then able to devote more of their cognitive capacity to figuring out the additional steps they had to perform.
The participants' average percentage of errors during daily probe trials was high (20%), and may have been an outcome of using single opportunity probes. Nonetheless, a high error rate during daily probes has been a noteworthy feature of the simultaneous prompting procedure, and has led to calls for modifying it to address this matter in light of the fact that this response prompting procedure is characterized as a near errorless learning strategy (Gibson & Schuster, 1992). Yet, explaining how simultaneous prompting operates, in certain contexts, with respect to two other evidence-supported strategies may result in a plausible, acceptable explanation for why such an error rate occurs.
In this study, the intervention served as a type of worked solutions strategy without a fading component (Riccomini & Morano, 2019), and the daily probe sessions served as a type of distributed retrieval practice (Hughes & Lee, 2019). Retrieval practice is an effective independent practice strategy for enabling students not only to acquire new content but also to retain it (Morano, 2019). Considering the acquisition, maintenance, and generalization outcomes realized in this study, the data reported may simply reflect the process students must go through in order to demonstrate these laudable results. Additionally, there is evidence that some students with disabilities learn from their errors (Leaf et al., 2011; Leaf et al., 2010), an important consideration when exploring how to expand the use of evidence-based instructional strategies designed to address concerns about the impact of errors on the learning of students with disabilities (Collins et al., 2018). Specifically, there is a need to investigate the use of these strategies with the wider population of students being taught in different tiers of an MTSS framework.
Regarding the participant who voluntarily withdrew from the study, the average length of her training and probe sessions is noteworthy. Whereas the participants who achieved mastery averaged 10 minutes, 49 seconds per training session and 7 minutes, 31 seconds per daily probe session, the participant who withdrew averaged 14 minutes, 16 seconds and 12 minutes, 20 seconds, respectively. She may have reached a point where fatigue, combined with her expressed angst about her mother being out of town, resulted in her decision to withdraw from the study.

Conclusion
In two respects, this study's results extend those of previous research investigating the effectiveness of teaching mathematics skills with the simultaneous prompting procedure. First, this study involved a new mathematics skill, and second, the participants neither had a disability nor demonstrated an academic achievement deficit. The latter circumstance has been the case in all previous, related research. Thus, this study is particularly germane to practitioners because it provides evidence for the use of an instructional strategy that can be employed across the range of students receiving instruction in a general education classroom. This is especially relevant in schools in the United States that use an MTSS protocol and investigate ways to make Tier 1 instruction effective for as many students as possible.
Another important outcome from this study that is germane to practitioners is that simultaneous prompting proved to be a near-errorless learning strategy that has an effective retrieval practice component. This component explains the participants' demonstration of skill maintenance. Likewise, the participants applied the five-step algorithm to three sets of unique addition basic facts. Hence, this study shows practitioners a way to concurrently address three of the four phases of learning: acquisition, maintenance, and generalization (Alberto & Troutman, 2017). The study also provides practitioners with an example of how to increase instructional efficiency by systematically including mathematics terminology as incidental information during an instructional trial.
The study's limitations have implications for conducting further research, which will likely lead to findings that direct practitioners in their work. First, research needs to be conducted in which a group arrangement is used rather than the 1:1 instructional arrangement in the current study. Group arrangements predominate in each tier of an MTSS framework and tend to be the arrangement used in public schools in the United States due to the instructional resources that are available. Moreover, these arrangements can further increase instructional efficiency through observational learning. Therefore, research that involves group arrangements could produce meaningful results for practitioners.
Second, research should involve a classroom teacher as the instructor rather than a researcher with extensive experience using the simultaneous prompting procedure. In this study, experimental control was strengthened because a very experienced researcher implemented the procedure with high fidelity with participants who did not have any experience with the procedure. Yet this arrangement detracts from the study's external validity because most teachers do not possess the researcher's expertise.
Third, future research also needs to investigate the inclusion of students who have been identified within an MTSS framework as being at risk for academic failure since, to date, only one mathematics study has investigated the use of simultaneous prompting with these students (Drevon & Reynolds, 2018). Such research would extend investigations of simultaneous prompting's effectiveness in teaching mathematics to students receiving Tier 2 services. Specifically, a noteworthy topic would be researching ways to adapt the procedure for these students, since doing so might ultimately lead to identifying how the procedure could be made effective for use across all students in a general education classroom, a hoped-for outcome in an MTSS framework.
Fourth, future research should address not only mathematics skills beyond those that have already been investigated but also different algorithms for solving addition basic facts. An example would be investigating a counting-on addition algorithm.
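To illustrate how a counting-on algorithm differs from the counting-all approach taught in this study, a minimal sketch follows. This is a general description of the counting-on strategy, not a procedure drawn from the study: the student starts from the first addend and counts up once for each unit of the second addend, rather than counting every object from one.

```python
def counting_on_addition(a, b):
    """Minimal sketch of a counting-on addition strategy.

    Start from the first addend and count up b times,
    instead of recounting both groups from one.
    """
    total = a  # begin at the first addend
    for _ in range(b):  # one count per unit of the second addend
        total += 1
    return total
```

Because counting-on requires fewer counts than counting-all, it involves fewer steps for the learner to hold in working memory, which may make it a useful comparison condition in future research.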
Altogether, this study provides support for the use of one evidence-based practice to teach a mathematics skill across the diverse student population that exists in many general education classrooms in schools in the United States. Furthermore, both the study's outcomes and limitations set the occasion for future research that would address the same focus as the present study. The results from this future research have a reasonable probability of providing practitioners with ideas for extending their repertoire of existing, evidence-based procedures for presenting differentiated mathematics instruction.

Table 1.
Participant demographics
Based on their academic achievement data, each student was being provided only Tier 1 services in the school's MTSS framework. While the students attended the program for a full day, formal academic instruction was presented only during the morning. The school's curriculum addressed the reading, writing, and mathematics standards from the state's Early Learning and Development Standards - 4 Years Old to Kindergarten. None of the participants had any prior experience with simultaneous prompting or any other response prompting strategy.

Table 3.
Number of training errors per step of the five-step algorithm

Table 4.
Number of daily probe errors per step of the five-step algorithm

Table 5.
Acquisition of incidental information