The Relationship Between Problem Posing and Problem Solving: A Systematic Review

Problem-posing research in mathematics education has increased in the last decade. As a result, the intent of the present study is to examine and summarize the research that has been conducted. Specifically, we examined articles published from 2011 to 2020 that studied the impact of problem-posing instruction on students' problem-solving abilities. In total, seven articles were included in the systematic review. We concluded that the effect of problem-posing instruction on students' problem-solving skills varies across studies, though prior research on the relationship between posing and solving suggests a positive impact. Thus, while there remains a great need for further research in this area, there are indications that problem posing could improve students' problem-solving abilities.


Background
Problem-posing instruction has slowly evolved within mathematics education. In 1983, Brown and Walter released their book The Art of Problem Posing, which detailed the importance of instruction based on problem posing in mathematics and debuted the "What if not?" approach. Silver (1994) increased the popularity of this approach, emphasizing it as a way of allowing students to be creative in the classroom. Though it has been nearly 40 years since the introduction of problem posing, educational researchers across the globe are still discovering new benefits of and outlets for incorporating it. Rosli et al. (2014) published a meta-analysis of the research at the time to provide a comprehensive lens on problem posing. Since that meta-analysis' publication, however, a multitude of studies examining the effectiveness of problem posing have been completed. The purpose of the present study is to examine the research published since 2011 (i.e., the most recent publication year covered by the previous meta-analysis) on the impact of problem posing in mathematics education.

Problem Posing
Problem-posing instruction is an approach to mathematics that comes in many forms. One method is to have students generate their own word problems (Silver, 1994). This can be completed with varying levels of structure, such as providing explicit equations or simply asking students to create their own story (Stoyanova & Ellerton, 1996). This can also be done through several techniques, such as using games (Kalmpourtzis, 2019), manipulatives (Rosli et al., 2015), or real-life applications (Calabrese et al., 2019). In addition to creating entirely new problems, students can modify existing problems (Silver, 1994). An example of this is the Brown and Walter (1983) "What if not?" approach, in which students analyze and modify problems to create their own. Problem posing is a versatile technique that can be introduced in many ways.
Multiple studies support the advantages of engaging in problem posing. For instance, researchers have argued that problem-posing instruction has the potential to increase students' mathematics achievement (e.g., Cankoy, 2014; Chang et al., 2011; Silver & Cai, 1996). Similarly, several researchers have stated that problem posing can be used as a holistic way of assessing students' understanding of concepts (e.g., Drake & Barlow, 2008; English, 1997; Lowrie & Whitland, 2000; Smith & Smith, 2006). These are only a few examples of the encouraging research associated with problem-posing instruction.

Problem Solving
Problem solving is traditionally considered an essential part of the mathematics curriculum (Brahier, 2020). However, the results of recent studies suggest that problem solving is also important to many other subjects, such as engineering (Nordstrom & Korpelainen, 2011), physics (Bassok & Novick, 2012), and English language arts (Common Core State Standards Initiative [CCSSI], 2010). Polya (1973) argued that problem solving is a fundamental goal of life itself. Thus, it logically follows that the CCSSI (2022) described problem solving as "required for success in college, career, and life." Unfortunately, assessments have indicated that problem solving is an underdeveloped skill in education (Greiff et al., 2014). Of the 44 countries/economies that assessed problem solving on the 2012 Programme for International Student Assessment, 20 scored statistically significantly below the Organisation for Economic Co-operation and Development (OECD) average. Additionally, with regard to the six levels of proficiency (1 being the lowest), only 57% of students scored at Level 3 or higher (OECD, 2014). Therefore, it is necessary to continue researching and developing students' problem-solving skills.
Despite the breadth and long history of research on mathematical problem posing, one of its earliest discoveries, the connection to problem solving, remains one of the most heavily researched. Silver and Cai (1996) found that students' problem-posing ability is connected to their problem-solving ability, a finding widely supported by additional research (e.g., El Sayed, 2002). According to Mamona-Downs and Downs (2005), "problem solving and problem posing have complementary roles… the roles cannot be neatly separated, and in particular, problem posing is an intrinsic part of problem solving" (p. 391). Because so many researchers have noted this connection, it would prove beneficial to examine the relationship between problem posing and problem solving. Specifically, the present study aims to answer the following research questions:
1) What quantitative articles have been published on the impact of problem-posing instruction on students' problem-solving abilities?
2) How does problem-posing instruction impact students' problem-solving abilities?

Search and Selection Procedures
The purpose of this study was to complete an updated review of the literature since Rosli et al.'s (2014) publication. More specifically, our intent was to further examine the connection between problem posing and problem solving by synthesizing existing literature published since 2011, the most recent year of publication included in Rosli et al.'s study. To accomplish this, the researchers conducted a thorough search, selection, and coding process to exclude irrelevant articles before analysis.

Initial Search
Researchers used six electronic databases to systematically search for relevant articles: Education Source, Education Full Text, ERIC, Teacher Reference Center, APA PsycInfo, and OpenDissertations. The initial search contained the phrase "problem posing" along with the search terms "mathematics OR math OR maths" AND "elementary OR primary." Search results were limited to articles published between 2011 and 2020. Rosli et al. (2014) published their meta-analysis on the impact of problem posing in 2014, with the most recent included articles published in 2011; starting the present search in 2011 therefore narrowed the results to articles published after those included in Rosli et al.'s study. The overall search resulted in 782 articles from the six electronic databases.

Article Selection and Coding
Using Rayyan (Ouzzani et al., 2016), two researchers assessed the 782 articles for relevance under certain criteria. First, any duplicates in the initial search were removed (n = 22 deleted). Second, articles needed to be written in English (n = 1 deleted). Third, articles needed to meet specific requirements regarding participants, original research, and study focus. Participants in the study needed to be students; studies utilizing teachers or preservice teachers as participants were excluded (n = 220 deleted). All included articles also needed to contain research rather than exclusively theory or literature (n = 94 deleted). For an article to have adequate study focus, the research needed to be on problem posing as described in the literature review portion of this study (i.e., containing evidence that participants were creating or modifying problems or questions; n = 387 deleted). After applying the aforementioned inclusion criteria, we retained 58 articles.
The coding process was completed using a Google Forms file containing the following study characteristics: article title, author(s), publication date, research paradigm (qualitative/quantitative/mixed), participants, treatment/intervention, outcomes measured, study design, and quantifiable data (e.g., means, standard deviations, sample sizes). To begin the initial coding, two researchers coded five articles and compared responses to establish interrater agreement. After discussion and agreement on all aspects of the coding sheet, the remaining 53 articles were split between the two researchers for full coding.
First, all qualitative articles (n = 31) were removed, as qualitative data can only describe impact, not measure effect. In other words, articles for this study needed to include quantitative data that could be used to calculate a Hedges' g effect size, such as means, standard deviations, sample sizes, or other sample statistics. Additionally, as part of the criteria, the study needed to have either pretest and posttest data or posttest-only data for multi-group comparison. Four additional articles were removed due to this criterion.
The researchers further sorted the remaining 27 articles by their measured outcomes. Due to the broad categorization of the outcomes, we grouped the outcomes by overarching themes, including creativity, efficacy and attitude, problem-posing ability, and problem-solving ability. The category with the highest frequency was problem solving. Therefore, the researchers chose to proceed with only the eleven articles focusing on problem solving so as not to compare different measured outcomes (see Figure 1). At the end of the coding process, seven quantitative studies on the relationship between problem posing and problem solving remained (see Table 1 for primary study details).

Categorical Analysis
The researchers recorded design and demographic details for each study. This included single test or pretest-posttest design, intervention type, content area focus, outcomes measured, location, grade level, and number of groups with respective sizes. Any measured outcomes that did not directly pertain to problem posing and problem solving were discarded. If a study contained comparisons of the same outcome across different combinations of groups (e.g., Akben, 2020) or different outcomes across the same groups (e.g., Kapur, 2018), the researchers noted this and recorded each set of data separately.

Statistical Analysis
Due to the small number of articles about problem posing and problem solving, as well as the multivariate nature of many of the articles, the researchers deemed that a full meta-analysis would not be appropriate for the present study. To amalgamate the recent research on the relationship between problem posing and problem solving, the researchers examined the existing data to interpret problem posing's measured effect on students' problem-solving skills via systematic review.
Researchers computed unbiased effect sizes and their respective sampling variances for each of the included studies. All calculations were completed using Microsoft Excel 2016. First, using data provided in each study, the researchers calculated Hedges' g, its associated sample variance, and 95% normal confidence intervals using DeFife's (2009) Effect Size Calculator in Microsoft Excel. In each case, the experimental groups (i.e., those prominently utilizing problem-posing instruction) were labeled Group 1, and the control or comparison groups were labeled Group 2. Next, the researchers calculated the unbiased variance for each Hedges' g (Hedges, 1981).
For consistency, researchers treated all studies as posttest-only control-group designs. In instances where studies had pretests and posttests, the researchers computed only mean differences between the independent groups rather than mean changes between matched pairs. This procedure was also done in the interest of examining the impact of problem-posing instruction on problem-solving ability by explicitly comparing interventions rather than monitoring growth.
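For readers who wish to reproduce calculations of this kind, the procedure described above can be sketched in code. The original study used DeFife's (2009) Excel calculator, so the following Python function is an illustrative reconstruction, not the authors' actual tool: it computes Hedges' g (Cohen's d with the standard small-sample correction), an approximate sampling variance for g, and a 95% normal confidence interval from two groups' means, standard deviations, and sample sizes. The example input values are hypothetical, not data from any included study.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g, its approximate sampling variance, and a 95% normal CI.

    Group 1 = experimental (problem-posing) group, Group 2 = control,
    matching the labeling convention described above.
    """
    # Pooled standard deviation across the two independent groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d (standardized mean difference)
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # Hedges' small-sample correction factor
    g = j * d
    # Approximate sampling variance of g (cf. Hedges, 1981)
    var = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    half = 1.96 * math.sqrt(var)             # half-width of the 95% normal CI
    return g, var, (g - half, g + half)

# Hypothetical posttest data: experimental group vs. control group
g, var, ci = hedges_g(78.0, 10.0, 30, 72.0, 11.0, 30)
```

A confidence interval that excludes zero indicates a statistically significant effect at the .05 level, which is the criterion applied in the results reported below.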

Results
The effect sizes for the selected studies varied greatly (see Table 2). The smallest effect size was g = -0.648 (var = 0.042; Yu & Chen, 2014), which in this case indicated that students who both generated questions and solved questions created by their teachers outperformed those who only generated questions. The largest effect size was g = 1.386 (var = 0.079; Akben, 2020), indicating that chemistry students who participated in both problem-solving and problem-posing activities outperformed those who only participated in problem-solving activities. Of the 16 effect sizes calculated from the seven studies, four were negative (Kapur, 2018; Kopparla et al., 2019; Yu & Chen, 2014) and 12 were positive. In other words, 75% of the calculated effect sizes associated problem-posing instruction with a positive impact on problem-solving performance. Furthermore, variances of the studies ranged from 0.042 (Yu & Chen, 2014) to 0.106 (Akben, 2020). In terms of statistical significance, only one (Kapur, 2018) of the calculated 95% confidence intervals did not contain zero. Therefore, the only instance of statistical significance occurred in measuring the impact of problem-posing instruction on students' conceptual knowledge of mathematics content.

Discussion
The purpose of this study was to provide an update to the Rosli et al. (2014) meta-analysis on problem posing as well as a general overview of the research that exists on the connection between problem posing and problem solving. By providing quantitative data from each of the included studies, practitioners and researchers alike can gain insight as to how this connection is being measured and in what context. Using seven articles, the researchers demonstrated that problem posing has been used as early as the elementary level (e.g., Kopparla et al., 2019) through the university level (Akben, 2020), in multiple countries (e.g., Akben, 2020;Kapur, 2018;Yu & Chen, 2014), and has been measured in many ways with a range of reported effects.
Though the total number of selected articles is relatively small, there was still a wide variety of contexts in which problem-posing instruction was measured. It is important to note that some of the research articles contained fewer details on the implemented interventions, thus making it difficult to compare the similarities and differences among studies. Additionally, while most of the selected articles were specific to mathematical problem solving (e.g., Kopparla et al., 2019), some of the articles focused on a broader definition of problem solving as an academic skill (e.g., Yu & Chen, 2014). Regardless of application, each study referenced problem/question generation and a possible connection to students' abilities to solve problems.
The present study demonstrates that there is some variation among research on the impact of problem-posing instruction on students' problem-solving abilities. Many of the effect sizes were positive, indicating that in most cases, students who were exposed to problem-posing instruction performed better than students who were not. Additionally, in three of the four cases where the effect size was negative (Kapur, 2018; Kopparla et al., 2019), the magnitude was less than 0.1, indicating that although the effect was negative, it was negligible by most researchers' standards for effect sizes. Furthermore, in each of these cases, the 95% confidence intervals showed a lack of statistical significance, further suggesting that the results do not indicate that problem-posing instruction has a negative impact on students' problem-solving ability. On a similar note, nine of the 12 positive effect sizes (Akben, 2020; Chang et al., 2011; Kapur, 2018; Suarsana et al., 2019; Yu & Chen, 2014) were above 0.5, indicating a large effect in favor of problem-posing instruction. However, while these effect sizes were larger, only one instance showed statistical significance (Kapur, 2018). Lastly, all effect-size variances were small, indicating little variability in each dataset and greater precision of each measured effect (Turner & Bernard, 2006). As the results are not entirely consistent, there is still ample room for research to improve upon these findings. For instance, it might be beneficial to replicate studies in which large effects were shown without statistical significance (e.g., Akben, 2020; Suarsana et al., 2019). Additionally, it would be worthwhile to examine the different aspects of problem solving, as seen in Kapur's (2018) research, to determine whether problem-posing instruction impacts these aspects differently.

Implications and Conclusion
Although the results of recent research on the impact of problem-posing instruction on students' problem-solving skills are varied, there is still much to learn from them. Taken together, these results provide continued evidence in support of Silver and Cai's (1996) argument that problem-posing instruction has a positive impact on problem-solving abilities. At the very least, there does not appear to be any substantial evidence suggesting that problem-posing instruction has a negative impact. Therefore, for teachers looking to update their instructional approach and introduce more recent methods, problem posing may be a beneficial strategy for improving students' problem-solving ability.