Elementary Schools Working as Professional Learning Communities: Effects on Student Learning

The professional learning community (PLC) is considered an effective school improvement strategy centered on student achievement. The goal of this study was to introduce the PLC approach in a few public elementary schools in Cameroon to evaluate the causal impact of this organizational model on student learning. A quasi-experimental approach was used, involving an experimental group and a control group. Student pre- and post-tests were administered in two core subjects (French and mathematics) at the beginning and the end of the first year of operation as a PLC. Our findings show a significant improvement in the students' results between the pre- and post-tests. The PLC was qualified as being in its initiation stage of development, when members focus on their students' outcomes and collectively engage in solving these students' learning difficulties.


Introduction
The Program on the Analysis of Education Systems (PASEC, 2016), under the authority of the conference of education ministers of French-speaking countries, evaluated student learning in the elementary schools of ten countries south of the Sahara (including Cameroon). The verdict was alarming: student outcomes in reading and mathematics were disappointingly low after at least six years of schooling. Among other conclusions, the report urged the governments and education leaders of these countries to identify and implement more effective organizational strategies for their schools to improve their students' academic results (PASEC, 2016; De Ketele, 2016).
In the case of Cameroon, this report was not surprising: in 2013, the country's ten-year official evaluation of its education system (RENSEN) had recorded discouraging student performance, with a quarter of elementary students repeating a grade and results that were weaker in public schools than in private-sector institutions. Although Cameroon's students scored slightly above average on the PASEC tests (69.3% compared to 64.5% in the other countries tested), the situation pointed to a definite lack of internal efficacy within the existing education system (DSSE, 2013).
In light of these revelations, we deemed it of interest to launch a pilot study based on the principles of the Professional Learning Community (PLC) approach to support and improve student achievement. This organizational model for schools centers on collaboration and the undertaking of group activities and reflection for the continuous growth and improvement of student learning (Roy & Hord, 2006; Bouchamma, Basque, Giguère, & April, 2019; Eaker & Keating, 2008). In this perspective, schools are encouraged to initiate PLCs (Vescio, Ross, & Adams, 2008; Fullan, 2004; Hord & Sommers, 2008). The PLC model provides schools with several advantages: winning strategies to enhance student learning (Muñoz & Branham, 2016; Tam, 2015; Louis, 2006; Lomos, Hofman, & Bosker, 2011; Hattie, 2008); an effective platform to supervise and guide teachers in their professional development (Bouchamma & Michaud, 2014; Doğan & Adams, 2018; Wood, 2007); a framework that encourages discussion, sharing, and support between teachers (DuFour, DuFour, Eaker, & Many, 2006); and an overall focus on student achievement (McLaughlin & Talbert, 2010; DuFour, 2007; Louis & Marks, 1998). Despite these advantages, the PLC approach remains non-existent in developing countries such as those south of the Sahara, including Cameroon, where schools remain entirely dependent on an archaic and strongly centralized system. This article presents the results of a pilot study examining a PLC model introduced in a few elementary schools in the Diamaré region of Far Northern Cameroon and the effects of this educational approach on student outcomes.

Literature Review
Studies have shown the PLC to be an effective work strategy that ultimately improves student achievement by bringing about change within the school (DuFour & Eaker, 1998; Stoll, Bolam, McMahon, Wallace, & Thomas, 2006). PLCs in the school setting also help to break the feeling of isolation experienced by many teachers by creating opportunities for collaboration and mutual support that nurture growth and the adoption of new teaching strategies (De Neve & Devos, 2017; Roy & Hord, 2006). This work method has also been found to improve the overall teaching experience, as is evidenced by a greater level of personal satisfaction and well-being among teachers (Hord & Sommers, 2008) and a greater sense of self-efficacy through professional development activities (Andrews & Lewis, 2002; Watson, 2014; Tam, 2015). By creating an environment that favors discussion and collaboration within the school, the PLC makes it possible to better monitor and support student achievement (Bouchamma et al., 2019; DuFour, DuFour, & Eaker, 2008; DuFour, 2004; Carpenter, 2018).

Characteristics of a School PLC
Studies on the characteristics of PLCs in the school setting are numerous and varied. Schussler (2003) identified three main groups of characteristics: the first group comprises such critical elements as reflective dialogue, a common orientation toward student learning, collaboration between the members of the school team, the sharing of experiences, a shared vision, and common values and standards; the second group regards the physical aspects, such as the time and space required to hold the collaborative meetings and the availability of material resources; the third group refers to the human and social resources, which involve mutual respect and trust between the members of the school team and are crucial components for collaboration and the sharing of leadership. These same characteristics are also evidenced in other studies (Roy & Hord, 2006; Hord & Sommers, 2008; DuFour, DuFour, Eaker, & Many, 2006).

Stages of Development of a School PLC
Inspired by the work of Fullan (2000) on the process of introducing reforms in schools, Huffman and Hipp (2003) conducted a five-year longitudinal study (1995-2000) on the introduction and development of PLCs in the school setting and identified three stages of development of this type of educational approach: initiation (Stage 1), implementation (Stage 2), and integration (Stage 3), corresponding to the different growth periods of the PLC toward maturity and full sustainment. Leclerc, Moreau, and Lépine (2009) also examined these three growth phases of the PLC in an education system. These authors equate the first level, or initiation stage, with the moment when the school principal and their team decide to introduce a PLC within their school and commit to following the principles of this approach. At first, the school team's priorities and vision lack clarity; the principal makes all of the decisions; collaboration and the sharing of experiences are laborious; and the members have doubts and concerns regarding teaching reforms, how these changes will take place, and how using the new strategies will affect their students' progress (Leclerc, Moreau, & Lépine, 2009; Sompong, Erawan, & Sudham-Dharm, 2015).
At Stage 2 or the implementation stage, the school's vision becomes clearer and is shared by every member of the school team; the new work conditions promote collaboration and the sharing of experiences between teachers; the principal's leadership has evolved into a sharing of responsibilities; and the teachers have adopted proven student-centered practices that have a greater impact on their students' results (DuFour, Dufour, & Eaker, 2008).
The final stage, integration, referred to as institutionalization by Huffman and Hipp (2003), is characterized by the effective application of the PLC principles within the school: a clearly defined common vision and shared common values, shared leadership, shared pedagogical practices and winning teaching methods that improve student outcomes, and effective collaboration between the school team's members. It is during this stage of development that the PLC reaches its full potential for action (Hord, 2008;Graham & Ferriter, 2008).
In their four-year longitudinal study on high school PLCs in Taiwan, Peiying and Wang (2015) examined the necessary processes a PLC must go through to pass from one stage to the next. These authors found that in the normal progression of a PLC, the passage from initiation to implementation took on average six to seven months, or the equivalent of an entire school year, while implementation took approximately another year before reaching full integration and institutionalization, which only began to bear fruit at the beginning of the third year of operation.
Thompson, Gregg, and Niska (2004) conducted a mixed-methods study with teachers and principals of six low-achieving elementary schools to examine the impact of their PLC on student achievement. Their findings show that the five principles of the learning community, as stated by Senge (1990), namely, personal mastery, mental models, a shared vision, team learning, and systems thinking, had a positive impact on student learning within a PLC and enabled the participants to merge their strengths and focus their collective energy on improving their students' outcomes.
In a study launched by the Conseil scolaire de district catholique centre-sud in Ontario, Canada on the effect of the PLC method on student outcomes in reading in eight francophone schools, Leclerc and Moreau (2010) followed the progress in reading of three groups of students during three consecutive years (2007-2010). Using student performance data gathered from the province's standardized tests and individual student observations in reading, the authors found that the average scores in reading significantly improved in two groups of students: at-risk students (3.09 on the pre-test to 7.33 on the post-test, an average difference of 4.24 points) and regular students (2.80 on the pre-test to 6.74 on the post-test, a difference of 3.94 points). The authors attributed this noticeable progression in the students' test scores to the changes in teaching practices discussed during the teachers' collaborative sessions within the PLC. They also concluded that schools that functioned as PLCs encouraged teachers to better focus their interventions with all of their students, including those who are learning-challenged.
In their ground-breaking study in Great Britain, Bolam and colleagues (2005) identified the characteristics of PLCs having the greatest impact on both teacher and student performance. The authors administered questionnaires to 2,300 kindergarten, elementary, secondary, and special education schools and, using 16 case studies conducted between 2002 and 2004, found a definite correlation between the PLC principles and the students' results in these schools. Indeed, when the authors used value-added models in their analyses and compared the student outcomes in PLC-organized schools and those without PLCs, they observed a positive difference. Bolam et al. (2005) confirmed that the difference observed was due to the effect of working as a PLC and that the more teachers were involved in the PLC process, the better their students' results.

Method
A quasi-experimental approach was used in this study. Questions pertaining to validity and to the prerequisite ethical aspects were considered, such as unit of analysis, length of intervention, selection of the different participating groups (experimental and control), contamination effect within the school, the reference to which the experimental group was compared, etc. (Creemers, Kyriakides, & Sammons, 2010).
We chose the difference-in-differences method (Coady, Kosali, & Ricardo, 2018) to analyze the students' pre- and post-test scores following the PLC initiative. Table 1 summarizes this analysis.
where X_T = the average pre-test score for the students in the experimental group; X_{T+1} = the average post-test score for the students in the experimental group; Y_T = the average pre-test score for the students in the control group; and Y_{T+1} = the average post-test score for the students in the control group. The impact estimate is the difference in differences, DID = (X_{T+1} - X_T) - (Y_{T+1} - Y_T).
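The difference-in-differences computation can be sketched in a few lines of Python. This is an illustrative calculation, not the study's analysis code; the means plugged in below are the French pre- and post-test group means reported later in the Results section.

```python
# Difference-in-differences: the experimental group's gain minus the
# control group's gain, which nets out improvement shared by both groups.

def did_estimate(x_pre, x_post, y_pre, y_post):
    """DID = (X_{T+1} - X_T) - (Y_{T+1} - Y_T).

    x_pre, x_post: mean pre-/post-test scores of the experimental group
    y_pre, y_post: mean pre-/post-test scores of the control group
    """
    return (x_post - x_pre) - (y_post - y_pre)

# French group means reported in this article's Results section:
gain = did_estimate(8.94, 10.08, 8.91, 9.87)
print(round(gain, 2))  # 0.18: a small net gain for the PLC schools
```

The point of subtracting the control group's gain is that ordinary year-over-year learning affects both groups; only the residual difference is attributed to the intervention.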
This procedure enabled us to neutralize the initial differences between the two groups of students to more precisely determine the impact of the PLC. Bertrand, Duflo, and Mullainathan (2004) discuss the reliability of estimates obtained with this method. Participants agreeing to take part signed a written consent form and were free to cease their participation at any time.
Because of the pilot nature of this study and the significant interest generated, all of the participating schools remained until the completion of the study.
The students' parents were informed that their child's standardized performance test scores from both the beginning and the end of the year would be analyzed; each parent was free to accept or refuse that their child's results be used for analysis.

Research Objective
The goal of the study was to organize a group of elementary schools into PLCs and to evaluate the effects of this teaching approach on student learning.

Samples
A multilevel sampling method was established for this study (Levy & Lemeshow, 1991). Through a network of general announcements, the preferred method of information and communication, the teacher sample was constituted on a volunteer basis, with a signed consent form describing the proposed activities requiring their participation, while the student sample was established by means of a student census (Lavrakas, 2008; Henry, 1990).
The experimental group was composed of six schools from the same campus operating on a half-day schedule that rotated weekly, with one group of students attending school in the morning and the other in the afternoon. The control group consisted of schools from a different campus, whose principals gave permission for their students' results on the same standardized performance tests to be used for comparison. Keeping the two groups on separate campuses enabled us to compare them without contamination.

Participants
The study was conducted in the Diamaré region of Cameroon's Far North province. The experimental group was composed of six elementary schools whose teachers (N = 48) worked in two PLCs with a total of 3065 students in grades 3 through 6.
Between November 2018 and June 2019, the teachers held three-hour collaborative meetings (a total of ten sessions) to discuss their students' results in mathematics and French and to learn new pedagogical strategies to not only improve their teaching practices and their students' outcomes, but also help them use student performance data in the group's decision-making processes.
The control group was composed of 976 students from four schools on another campus in the city, where the PLC approach was not introduced. The beginning-of-year and year-end results of these students and those of students from the same grade levels in the experimental group (N = 3065) were compared. The results of the students of each participating teacher were all used for analysis. The data pertaining to students who participated in the pre-test but were absent at the post-test (and vice versa) were removed prior to analysis (109 observations). Table 2 presents the total sample under study. Most of the participants were contract teachers (73.53%) and about two-thirds were women (67.65%). The government-appointed employees had the most teaching experience.

Experiment
Proper supervision of the established PLCs was ensured for both groups during the experimentation phase.

Establishing the PLCs
The process by which the PLCs were established in the schools followed the action research approach (Susman & Evered, 1978;Kemmis, McTaggart, & Nixon, 2014) and the following series of actions: identify the problem; plan actions or study various action strategies to solve the problem; and execute the actions and evaluate or study the effects of these strategies.

a) Identifying the problem
In light of the students' alarming test results at the beginning of the school year, we met with the teachers and principals of each group to expose this serious situation and identify the causes of these disappointing outcomes.
Using the priority list technique developed by Chevalier and Buckles (2009), the participants were asked to identify what they could do and what they wanted to see happen in their school to improve the students' outcomes. This phase enabled us to introduce the PLC approach as an effective work method to enhance student achievement.

b) Planning
At this stage, a two-day training workshop was organized for teachers and principals to present the major themes of the PLC: definition, principles, objectives, and rules and conditions (collaboration and teamwork; member commitment; mutual support; shared responsibilities among the members; an established mission and vision of the goals to be pursued; use of relevant data; shared leadership; collaborative sessions; and the development of a clear school improvement plan) to facilitate PLC operations within a school (DuFour & Eaker, 2004; Hord & Sommers, 2008; Roy & Hord, 2006; Bouchamma et al., 2019).
At the end of each workshop, the training modules presented were given to the participants, who then completed a test to verify their understanding of the notions discussed. Any misunderstood notions were reviewed by the project facilitators.

c) Taking action
This stage of the process focused on the application of aspects relative to a change of culture within the school to transform it into a PLC and the deployment of the actions identified in the intervention plan: an established mission; a vision; common values and objectives for the school; organized collaborative meetings; the elaboration of a well-defined school improvement plan, etc. (Eaker, DuFour, & Dufour, 2004).
The participants identified the actions that were likely to improve their students' results and determined a mission, a vision, and the goals to be pursued regarding student learning. Defining improvement objectives led them to develop a clear plan for each identified objective that outlined the activities and action strategies to be undertaken, the persons responsible for their implementation, as well as a calendar.
ies.ccsenet.org International Education Studies Vol. 13, No. 6; 2020

d) Evaluating/studying the effects of the action program
During the 10 collaborative meetings, the teachers shared their experiences and listened to their colleagues describe which methods should be used to improve their students' results. They discussed, shared, and appreciated the observed changes and were asked to evaluate each action proposed in the previous session, the positive and negative aspects of its application, and any new propositions.

Supervision of the PLC
We regularly participated in the collaborative sessions and used a checklist to ensure that the principles, rules, and conditions of the school PLC were being appropriately applied. Following each meeting, a report summarizing the subjects discussed and the proposed actions was prepared and was validated by the participants, and the agenda of the following collaborative sessions was determined based on the actions undertaken and those to come.

Data Collection
During the 2018-2019 school year, pre-test/post-test evaluations in mathematics and French were administered to the students in the PLC schools in the form of a standardized test. The students in the non-PLC schools underwent the same pre-test/post-test procedure. The scores of the two groups on both evaluations were collected in November 2018 and again in June 2019. The French test evaluated the students' abilities in reading comprehension, spelling, grammar, and verb conjugation, while the math test evaluated such notions as numbers and calculations, measurements and quantities, spatial geometry, and statistics.

Constructing the Standardized Test
The construction of the standardized test was achieved in seven steps: planning the test's objective and anticipated usage; determining what was to be measured; creating and formatting the items; evaluating their level of difficulty; determining the number of items to be developed; and finally, analysis and validation of the items (Laveault & Grégoire, 2014). The goal being the evaluation of the effects of the PLC model in the school setting, this test was a means to compare the scores of the different student groups to assess the possible effects of the PLC through the introduction of new pedagogical practices.
The test items were designed to reflect the learning objectives of the official education programs established for each level. Each item thus had to state a problem as well as provide instructions on the action to be taken, response elements, and scoring rules. The multiple-choice and short-answer questions required that the students provide a word, a phrase, or a number, with only one possible answer. Ultimately, a group of items of average difficulty was retained, as the goal was not to discriminate among the students but rather to determine the presence or absence of an impact of the PLC on their academic results.
As for the number of items, the intent was 20 items in French and 20 in mathematics. For reliability and validity purposes, we began with a test containing 40 items in French and 40 in math to ultimately retain the 20 items displaying the greatest metric qualities.
The test items were analyzed and validated in two steps: First, a qualitative analysis was performed by teachers and pedagogical supervisors who evaluated the conformity of the items with regard to the learning objectives defined in the educational programs. These "experts" also provided input on, among others, item formulation, language level with respect to the population under study, relevance of the answers to the proposed items, absence of ambiguity in the vocabulary used, grouping of the items within the test, and relevance of each item to minimize test length. Their invaluable comments and feedback thus enabled us to adjust the items and validate the test's format.
Thereafter, a quantitative analysis was conducted with the help of SPSS 20.0 software. This analysis of the items helped us choose the most-suited ones among an initial ensemble that was larger than necessary. The test was administered to a student sample and the metric qualities of each item were then evaluated. Because the test was initially constructed and tested with 40 items, those displaying a negative correlation or strong multicollinearity in the entire sample were deleted from the final test version. We also performed a differential item functioning (DIF) analysis to avoid favoring one group over another.
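The quantitative item screening described above can be illustrated with a small sketch. This is not the SPSS procedure used in the study; it is a minimal, self-contained example of one classical criterion, the corrected item-total correlation, under which items that correlate negatively with the rest of the test are candidates for deletion.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def item_total_correlations(responses):
    """Corrected item-total correlation for each item.

    responses: one row per student, each row a list of 0/1 item scores.
    Each item is correlated with the total score *excluding* that item,
    so that an item is not trivially correlated with itself."""
    n_items = len(responses[0])
    return [
        pearson([row[j] for row in responses],
                [sum(row) - row[j] for row in responses])
        for j in range(n_items)
    ]

# Hypothetical responses from four students on a two-item test:
print(item_total_correlations([[1, 1], [0, 0], [1, 1], [0, 0]]))  # [1.0, 1.0]
```

In practice this screening would run on the full 40-item pilot data, dropping negative or near-zero correlations before the 20-item final form is assembled.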

Test Standardization
Because both groups of students were evaluated on the same educational program and the tests were identical, as were the conditions under which they were administered (length, instructions on how to answer the questions), standardization of the test was successful. Furthermore, a single correction grid was employed, including a list of responses and the scores to be given for right or wrong answers. The standardization process thus enabled us to compare the scores obtained by the two groups. However, we did not standardize the test scores themselves; the raw scores were used in the subsequent analyses.

Data Analysis
The data were analyzed using SAS 9.4 software. Because schools constituted the unit of analysis in our study, a mixed linear model was used to account for the repeated observations of the students' outcomes on the two tests, the hierarchical nature of these data, and the possible dependence between students in the same class or the same school. The fixed effects modeled and compared were the "group" effect (experimental and control), the "time" effect (pre- and post-test), the interaction between the group and time effects, and the students' age and gender.
The random effects considered in the model to reflect the hierarchical structure of the experiment, but not taken into account in the comparison, were the student effect, the classroom effect, and the interaction between the pre- and post-tests.
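As a rough illustration of this model specification, the sketch below fits a comparable mixed linear model in Python with statsmodels rather than SAS. The data are simulated, the column names are assumptions for illustration, and only a single random intercept per student is shown; the study's full model also included classroom-level effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300  # hypothetical students, each tested twice (pre and post)

df = pd.DataFrame({
    "student": np.repeat(np.arange(n), 2),
    "group": np.repeat(rng.integers(0, 2, n), 2),  # 0 = control, 1 = experimental
    "time": np.tile([0, 1], n),                    # 0 = pre-test, 1 = post-test
    "age": np.repeat(rng.integers(8, 13, n), 2),
    "gender": np.repeat(rng.integers(0, 2, n), 2),
})
# Simulated scores: a common gain over time plus a small extra gain for
# the experimental group (the effect the interaction term captures).
df["score"] = (9.0 + 1.0 * df["time"] + 0.2 * df["group"] * df["time"]
               + rng.normal(0, 1.5, 2 * n))

# Fixed effects: group, time, their interaction, age, and gender;
# random intercept per student to model the repeated measures.
model = smf.mixedlm("score ~ group * time + age + gender",
                    data=df, groups=df["student"])
result = model.fit()
print(result.params["group:time"])  # estimate of the group x time interaction
```

The group x time interaction is the term of interest here, since it plays the same role as the difference-in-differences contrast: a significant interaction means the two groups improved at different rates.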

Results
We examined schools functioning as PLCs and the effects on their students' outcomes in two core subjects, namely, French and mathematics. The descriptive analyses of the test scores indicate that in French, at the beginning of the school year prior to the experiment, the results of the two groups were comparable, while both groups' post-test results were higher. This improvement was slightly more noticeable in the experimental group between the pre-test (M = 8.94; SD = 1.97) and post-test (M = 10.08; SD = 1.98) than in the control group (pre-test M = 8.91; SD = 1.90; post-test M = 9.87; SD = 1.93).
In math, the results show that the two groups were at different levels at the beginning of the year but that both improved between the pre-test and the post-test. The pre-test results of the experimental group were relatively higher (M = 9.59; SD = 1.74) than those of the control group (M = 9.05; SD = 1.49), while on the post-test, the control group's scores (M = 10.47; SD = 1.73) were similar to those of the experimental group (M = 10.32; SD = 1.90).
Mixed linear analyses were then performed to compare the "group" effect (control and experimental), the "time" effect (pre-and post-test), the interaction between the "group" and "time" effects, and finally, the students' age and gender.
As for the students' outcomes in French, the results of the fixed effects of the covariables "age" and "gender" in Table 4 show p values greater than 0.05 (p = 0.21 and p = 0.49, respectively), which indicates that age and gender had no effect on the students' outcomes. The group-time interaction reveals a significant effect (p = 0.04). Indeed, with a fixed effect of the "group" variable and by comparing the effect of the "time" variable in the experimental group, the tests of effect slices results show a significant effect (p < .0001) of the "time" variable, as a notable difference was observed between the pre-test scores and the post-test scores of the experimental group. Similarly, in the control group, our findings show a significant effect (p < .0001) by the "time" variable, with a significant difference observed between the pre-and post-test scores.
However, when the effect of the "time" variable was fixed and the effect of the "group" variable was compared, the results indicate no significant effect on either the pre-test (p = 0.78) or the post-test (p = 0.11); there was therefore no significant difference between the two groups at the beginning or the end of the year in French. This indicates that both groups were at almost the same level in French and that their marks had improved almost identically by year's end. Table 5 summarizes the effects of the "group" and "time" variables in French.

In general, our findings show that the students were all at the same level at the beginning of the year on the French pre-test. While the outcomes of both the experimental group and the control group improved significantly, this improvement came about differently, as evidenced by the notable effect of the interaction between the "group" and "time" variables (p = 0.04). The outcomes of both groups improved significantly on the post-test compared to the pre-test, yet were similar at the end of the year, as shown in Figure 1.

Figure 1. Temporal variation of students' results in French
An ANOVA analysis of the differences in variation in French shows that the difference in improvement between the two groups under study is attributable to the classroom (p = 0.012) and the student (p < .0001) but not to the school (p = 0.1).
In math, student age and gender also had no influence on the students' scores, as the fixed effects of the "gender" (p = 0.17) and "age" (p = 0.08) variables were not significant at the 0.05 level (see Table 6). The interaction between the "group" and "time" variables reveals a notable effect (p < .0001), signifying that both groups improved, yet in a different manner. This observation was confirmed when examining the averages of the two groups between the pre-test and the post-test. When the effect of the "group" variable was fixed and the effect of the "time" variable was compared, a significant effect for the experimental group (p < .0001) and the control group (p < .0001) was observed, with a noticeable difference between the pre-test and post-test results for both groups.
When the "time" variable was fixed and the effect of the "group" variable was compared for the two test periods, the tests of effect slices (see Table 7) revealed a significant effect at the pre-test (p = 0.0061). Indeed, our findings show a notable difference in the pre-test outcomes in both groups, which indicates that the two groups were not at the same knowledge level in math at the beginning of the year. However, the post-test outcomes of both groups show no significant effect (p = 0.45), as no significant difference was observed between the two groups at this final test period. This also indicates that at the end of the year, the results of both test groups were very similar. The results of both groups of students in math were not comparable at the beginning of the experiment because the experimental group was superior to the control group in terms of performance. During the experiment, the outcomes of both groups improved in their own way and were at almost the same level by the end of the study. This suggests that the control group improved more than the experimental group did in math during the year, as is shown in Figure 2, where a difference of variation is observed between the pre-test and post-test results for both groups in this subject.

Figure 2. Temporal variation of students' results in math
As with the marks in French, we performed an ANOVA analysis of the differences in variation of the marks of both groups in math. Our findings show that the difference in improvement observed between the experimental and the control groups was due more to the classroom (p = 0.0007) and to the student (p < .0001) than to the school (p = 0.1).

The various analyses performed show the following notable results relative to the variation in the students' outcomes at the beginning and the end of the school year. On the French test at the beginning of the year, the outcomes of both groups were similar, with no difference observed; the students were thus at the same level in terms of knowledge acquisition. At the end of the year, after the schools in the experimental group had worked as PLCs, the results of this group had improved and were at the same level as those of the control group, with no notable difference. There was therefore no discernible effect of the PLC on knowledge acquisition in French.
In mathematics, the performance level of the two groups of students at the beginning of the year was not the same, as the experimental group's results were superior to those of the control group. At the end of our study, however, no difference was observed between the outcomes of the two groups, which had improved and were at the same level. This indicates that the control group improved more than the experimental group did, consistent with the absence of a noticeable effect of the PLC schools (experimental group) on knowledge acquisition in math.

Discussion
The standardized knowledge acquisition test results show that both groups of students improved in French and were at the same level at both the beginning and the end of the year. This improvement was also observed in mathematics, which suggests that the variation occurring during the school year was the result of an accumulation of knowledge characteristic of the teaching/learning process. Accordingly, no effect of the experimental group's schools working as PLCs on student learning could be observed at the end of the study.
Schools functioning as PLCs for one year are referred to as being in the initiation stage; at this point, the impact of new practices introduced within the PLC to monitor student learning is not yet palpable, as indicated by several authors on the subject (Huffman & Hipp, 2003; Leclerc, Moreau, & Lépine, 2009). This impact becomes more tangible at the end of the second year of operation, when the PLC enters the implementation stage (Peiying & Wang, 2015). Therefore, in the case of the PLC schools in this study, at such an early stage, the effects are not yet noticeable.
We strongly recommend that more schools adopt the PLC approach, which ensures teacher professional development by their peers through collaborative initiatives and activities during which teachers can discuss their pedagogical practices, share their winning strategies and successful experiences, and collectively examine the impact and the effects of these practices on how well their students are learning.
Another non-quantifiable aspect observed during this study was the interest generated and greater awareness instilled in the teachers regarding their students' lack of achievement. Indeed, this concern was omnipresent throughout the PLC collaborative meetings during which the teachers came to hold their schools and themselves accountable for their students' failures in a context where accountability policies and standards do not exist. This realization on the part of the teachers may become the deciding factor that ignites a greater level of commitment to the academic success of the students through improved teaching practices.
An analysis comparing the learning levels of the two groups after two and three years of PLC operation will enable researchers and educators to better monitor and evaluate the effects of this educational approach on student learning.

Limitations and Future Considerations
One of the limitations of this comparative study was the smaller number of students in the control group compared to the experimental group. A smaller group carries a higher standard error, as a few students with very high or very low scores produce greater variation in the group's average when the number of participants is low. In this perspective, an equal number of students in the two groups would have enabled a better comparison of the data on the progress achieved by the students within each group and over time.
Further research should take into account the different characteristics of the PLC when evaluating its effects on student learning, such as shared leadership, common vision and values, teamwork, commitment to progress, the adoption of a results-based approach, and the availability of material and financial resources, among others (Hord, 1997; DuFour & Eaker, 2004). Considering these inherent principles of effective PLCs would thus make it possible to identify which characteristics have a positive influence on student achievement.
This pilot study provides knowledge on the feasibility of this educational model on a larger scale in the context of Cameroon, where the PLC approach in the school setting does not exist, and calls upon us to consider and investigate the different aspects of this work method and how it can improve teaching practices and student learning.