Preparing Public Administration Scholars for Qualitative Inquiry: A Status Report

This paper reiterates in succinct form the discipline's discussion of how qualitative inquiry stands to contribute substantively to public affairs knowledge and why it is therefore a necessary skill for scholars to develop in the field and its cognates: public administration, public policy, and political science. It then presents empirical data on doctoral research methods requirements collected in 2004-2005 and 2011-2012 to demonstrate that while preparation in qualitative inquiry is improving, at least one-third of the doctoral programs associated with NASPAA-accredited Master of Public Administration degree programs still fail to prepare candidates in qualitative inquiry. The discussion therefore remains relevant, and doctoral programs are encouraged to continue curricular reform and enhancement. Furthermore, scholars are encouraged to continue research on research in public administration.


Research in Public Administration
There are two commonly cited challenges to identifying public administration as a scientific discipline: (1) it is not distinct from other disciplines; and (2) it lacks the methodological rigor and quality to build knowledge. The second argument supports the first and includes criticism of the substantive research focus of public administration, the purpose of inquiry, and the quality of methodological execution. Virtually all points made in the literature were corroborated by comments from program directors, demonstrating that they are widely held views (Stout & Morton, 2005). Such quotes are not included for the sake of brevity and to avoid redundancy. The following sections address the issues noted in Figure 1.

Is Public Administration a Discipline?
In the social sciences, a discipline is generally considered to be a forum for scientific inquiry that differs from that of others. That is, the theories used, the issues studied, and the core knowledge bases are relatively discrete from discipline to discipline (Adams, 1992; Neumann, 1996). However, the study of public administration displays both multidisciplinarity and interdisciplinarity: the former results from a shared focus on a topic of research, while the latter occurs through an exchange of theoretical insights (Raadschelders, 1999, 2005a). Nevertheless, unique issues are of concern to public administration as opposed to its cognate fields, and researchers seek answers independent from other disciplines (Stallings & Ferris, 1988). For some, this indicates that it may be fine to be perceived as an enterprise rather than as a discipline, so long as the important questions are being asked and answered well (Waldo, 1984). These nuances should not detract from conducting social science focused on the theory and practice of public administration (Frederickson).

Quality of Public Administration Research
Given the overwhelming desire to be perceived as a legitimate field of study, if not a discipline per se, it is predictable that the topic of research quality would be widely discussed (White & Adams, 1994b). A resounding theme is that public administration research lacks methodological rigor because it makes insufficient use of the positivist, explanatory social science methodology (Houston & Delevan, 1990; McCurdy & Cleary, 1984; Perry & Kraemer, 1986; Stallings & Ferris, 1988; White, 1986b; White & Adams, 1994a). However, those who promote interpretive and critical methodologies also question the quality of research (Catron & Harmon, 1981; Denhardt, 1984; Hummel, 1977; Perry & Kraemer, 1986; Thayer, 1984; White, 1986b). Therefore, assuming that explanatory, interpretive, and critical approaches all constitute valid research purposes, there is agreement that the quality of public administration research must improve across a number of dimensions. Of initial concern is the substantive focus of study. Second, the purpose of research conducted is questioned. Third, critiques of methodological execution are given.

Quality of Substantive Focus
Measures of importance, relevance, and criticality in the assessments made in the past several decades suggest public administration research is not asking the right questions (Adams & White, 1994; Cleary, 1992, 2000; McCurdy & Cleary, 1984). Big questions are "problems with immediate and great significance" (Lieberson, 1992, p. 12). There are two ongoing arguments in regard to the big questions of public administration: (1) which questions should be pursued; and (2) whether the questions meet the needs of scholars, practitioners, or both. In each thread, there are competing recommendations for action. Many prescriptions are made for which questions public administration research should address (Cleary, 1992, 2000; McCurdy & Cleary, 1984; NASPAA, 1987; Perry & Kraemer, 1986; Stallings & Ferris, 1988; Streib, Slotkin, & Rivera, 2001). For example, a host of topical "big questions" in public administration have been put forward (Agranoff & McGuire, 2001; Behn, 1995; Brooks, 2002; Callahan, 2001; Cooper, 2004; Kirlin, 1996, 2001; Neumann, 1996; Rohr, 2004). Two fairly comprehensive analyses were offered by Lan & Anders (2000) and Raadschelders (1999), describing empirical concerns and theoretical perspectives that have been used in public administration research. Recurring themes include: (1) the distinction between what is public and private in administration, or the notion of "publicness"; and (2) the relationship between politics and administration, or public administration and society. Another, more general approach to ensuring relevance is to identify specific levels of analysis, including individual, organizational, and societal. While societal levels may be of greater interest theoretically, individual and organizational levels tend to be of greater interest to practitioners.

Quality of Purpose
Some critiques suggest that public administration does not produce enough basic research as opposed to applied or descriptive research. Applied research considers practical issues of what and how. As noted by Raadschelders (1999), these questions typically require deductive research approaches. Studies focus on explanation and prescription, using knowledge to predict and control the world and its inhabitants. Some questions call for more theoretical and basic research on who and why, which are most typically answered through inductive research. These questions seek to identify patterns or to understand social phenomena through exploratory inquiry. They include deeply normative questions that call for combinations of descriptive, interpretive, and critical scientific methods. Of course, the division between the different types of questions and their associated research approaches is not hard and fast. For example, large-sample regression analysis often contributes to explanations of why social phenomena occur. However, the difference in basic purpose leads to the next quality concern.
The purpose of both quantitative and qualitative research is found to be lacking by most accounts (Adams & White, 1994; Brower, Abolafia, & Carr, 2000; Houston & Delevan, 1990; McCurdy & Cleary, 1984; Perry & Kraemer, 1986; Stallings & Ferris, 1988; White, 1986b; White & Adams, 1994a; White, Adams, & Forrester, 1996). First, calls are made to increase basic research and theoretically grounded applied research to avoid the trap of what many refer to as "practice research". What this means is that purely descriptive studies, while perhaps of interest to practitioners, do not add sufficient value to the field of study. Case studies must be theoretically grounded and analytical in nature to simultaneously address problems of practice while building disciplinary knowledge (Orosz, 1997; Raadschelders, 1999, 2011). In the pursuit of this research purpose, there is an apparent openness to methodological pluralism with the caveat that the quality of methodological execution improves (Cleary, 1992, 2000; McCurdy & Cleary, 1984; Perry & Kraemer, 1986; Riccucci, 2010). Yet it must be noted that some scholars cling fiercely to strict positivist epistemologies and the mainstream social science paradigm that demands explanation and prediction as opposed to understanding (Stallings, 1986).
In either case, public administration research must better articulate a clear purpose of explanation or understanding (Adams & White, 1994; Lowery & Evans, 2004; Neumann, 1996; Raadschelders, 1999). Such disclosure will help forge linkages between empirical research on issues of practice (applied) and underlying theoretical foundations (basic) so that both practical and scientific knowledge are enhanced and a clearer relationship between inductive and deductive approaches is forged.

Quality of Methodological Execution
Even when important questions are being asked in a way that serves the purpose of growing knowledge in the field as well as practice, there are many critiques of methodological execution (Cleary, 1992, 2000; Houston & Delevan, 1990; McCurdy & Cleary, 1984; J. L. Perry & Kraemer, 1986; Stallings, 1986; Stallings & Ferris, 1988; White, 1986a; Wright, Manigault, & Black, 2004). Table 1 shows the criteria used to judge research quality in studies of public administration research by scholars and students (Adams & White, 1994; Bailey, 1992; Cleary, 1992, 2000; McCurdy & Cleary, 1984; White, 1986a; Yin, 1994). These criteria are fitted to mainstream social science research in the deductive, positivist, explanatory tradition even though most research in public administration is inductive and qualitative in nature (e.g., case studies) (Houston & Delevan, 1990; White, 1986b). Indeed, the pessimistic view on quality has been attributed to "inappropriate assumptions about what is acceptable as research in public administration" (Box, 1994, p. 76). In essence, interpretive and critical methods will never meet deductive, positivist criteria, because those criteria are inappropriate measures for assessment (Jensen & Rodgers, 2001).
To make such an assessment fairly, we must use criteria that are broad enough to accommodate diverse mixes of paradigmatic components while achieving the goal of establishing the trustworthiness of research results. There are two basic approaches to the issue of evaluative criteria for interpretive research. Some scholars recommend using the mainstream standards of validity and reliability (Cresswell, 1998). Others recommend similar but different standards, such as credibility versus validity and dependability versus reliability (Guba & Lincoln, 1998). However, many others have weighed in on this point, offering up a great variety of concepts as possible standards (Lowery & Evans, 2004), with the two most common being rigorous and systematic. These concepts are defined differently in the positivist and interpretive paradigms.
The term rigorous has become synonymous with mathematical and statistical forms of analysis, relegating an inductive approach to a non-rigorous standing by default (Bevir, 2004; Schwartz-Shea, 2004). Furthermore, this interpretation of the term assumes that relevance for scientific knowledge is achieved by following the deductive scientific method (Dodge, Ospina, & Foldy, 2005). However, these interpretations put empirical research at risk of abstraction to the point where theoretical reflection is lost and results lose all practical relevance or are merely spurious in nature (Adams & White, 1994; Behn, 1995; Cleary, 2000; Kelly & Maynard-Moody, 1993; Mills, 1959; Strauch, 1976). Therefore, while procedural rigor may meet the expectations of positivist science, both theory-building and practitioners require more. Interpretive and other qualitative methods have a more robust concept of rigor in that it must be procedural as well as philosophical (Bevir, 2004; Dodge et al., 2005; Guba & Lincoln, 2005; Yanow, 2004). Procedurally, the researcher must clearly explain how he or she obtains, analyzes, and reports data. To avoid positivist misinterpretations, this is often called a systematic approach. This criterion helps evaluators differentiate systematic exploration of an issue from casual observation. A systematic approach need not be the inflexible, step-wise, deductive method (Behn, 1995; Yanow, 2004). Interpretive research is systematic in its application of the logic of induction and its iterative response to the research context and subject, as in the production of grounded theory (Glaser & Strauss, 1967).

[Table 2 about here: mainstream evaluative criteria paired with interpretive counterparts, including Credibility (Lincoln & Guba, 1985), Authenticity (Brower et al., 2000; Miles & Huberman, 1994), Trustworthiness (Lincoln & Guba, 1985), and Veracity (Atkinson, Coffey, & Delamont, 2003) for validity; Applicability; Transparency and Dependability (Erlandson et al., 1993; Lincoln & Guba, 1985) and Auditability (Miles & Huberman, 1994) for consistency/reliability; Triangulation (Cresswell, 1998; Miles & Huberman, 1994), Confirmability (Erlandson et al., 1993; Lincoln & Guba, 1985), Plausibility (Brower et al., 2000), Testing out with others (Moustakas, 1994), and Intersubjective agreement (Yanow, 2004) for neutrality/objectivity; and Criticality (Brower et al., 2000) for critique.]

As shown in Table 2, there are criteria to ensure qualitative rigor comparable to those of quantitative rigor. For philosophical rigor, researchers must show that their work sheds light on something of practical and theoretical importance in a manner that is reasonable. For this reason, White & Adams (1994a) emphasize that interpretive and critical research should be assessed in terms of its value to the practice of public administration.
Evaluative criteria such as practical relevance, importance, utilization, application, and action take on a much greater importance in assessments of quality.
With these understandings of rigor as systematic approach and theoretical relevance, we can develop an appropriate set of evaluative criteria for public administration. This discussion may still be "in its infancy for public administration, but is well advanced by researchers in other disciplines and practice fields" (Orosz, 1997, p. 8), as shown in Table 2. However, it must be noted that even when using appropriate criteria, the assessments of interpretive and critical research are still quite negative across the board (Adams & White, 1994; Brower et al., 2000; Daneke, 1990; Lowery & Evans, 2004; Luton, 2010; White, 1986a; White et al., 1996). Program directors attribute poor quality to the lack of faculty training in both quantitative and qualitative methodologies (Stout & Morton, 2005). Therefore, it is reasonable to assert that pedagogical improvements are necessary in both paradigms. From the literature reviewed, it is apparent that a more widespread understanding is needed of what these methodologies actually entail.

Methodological Pluralism
Agreement regarding substantive focus, research purpose, and the quality of methodological execution hinges on concurrence that multiple methodologies should be used in public administration research. Methodology guides the conduct of research (J. L. Perry & Kraemer, 1986). In large part, methodologies are linked to either the inductive logic of discovery or the deductive logic of proof. The two logics are mutually interdependent and independently insufficient for the best cumulative results. In fact, they have been said to form the "yin and yang of inquiry," a cohesive whole from which scientific knowledge is generated and tested (R. Perry, 2002).
Research methods are techniques used within a particular methodology. Qualitative research methods most often use the logic of discovery, wherein inductive reasoning is used to build understanding of empirical experience.
On the other hand, quantitative research methods most often use the logic of proof, wherein deductive reasoning is used to test hypotheses against empirical evidence. Thus, the two sets of logic are often described as different paradigms (Kuhn, 1996), either as inductive/deductive or qualitative/quantitative (Denzin & Lincoln, 2005a), thereby meshing methodology and methods (Riccucci, 2010).
A paradigm or research gestalt is made up of ontological and epistemological assumptions, theoretical commitments, research goals, and research practices (Schwartz-Shea, 2004). One approach to paradigmatic definition is to categorize these elements into coherent sets that compose ideal-types (Weber, 1994) of inquiry. Much of the literature links quantitative and qualitative methods to associated research paradigms (Agnew & Pyke, 1994; Barker, 2000; Brady, 2004; Bray, Lee, Smith, & Yorks, 2000; Comte, 1976; Denzin & Lincoln, 2005a, 2005b; Glaser & Strauss, 1967; Gouldner, 1964; Hummel, 1991; Kelly & Maynard-Moody, 1993; Kuhn, 1996; Orosz, 1997, 1998; Platt, 1964; Popper, 1972; Schwartz-Shea, 2004; Trochim, 2004; White, 1992, 1994; White & Adams, 1994a; Yanow, 1999, 2004). Following this approach, Brower et al. (2000) provide a useful taxonomy that summarizes typical differences between quantitative and qualitative research, as shown in Table 3. As shown in Table 3, there are fundamental differences between quantitative and qualitative methodology. However, this approach may be inherently flawed. It is quite possible to use qualitative methods such as semi-structured interviews in a deductive inquiry. Indeed, many qualitative researchers are positivists and post-positivists; not all are interpretivists. Similarly, many quantitative researchers do not confine themselves to the pure deductive paradigm of realism, objectivism, positivism, experiment, statistical analysis, and scientific reporting. For example, one may use quantitative methods like written surveys and Q factor analysis inductively. Therefore, rather than starting with methods, this discussion will follow Raadschelders' (1999) lead and start with the two basic logics of inquiry: inductive discovery and deductive proof. Attached to these logics are generally coherent ontologies, epistemologies, theories, specific methodologies, methods, and research products (Raadschelders, 2011; Riccucci, 2010). But with no hard-and-fast rules, the best that can be done is to discuss general characteristics and outline the full complexity of these choices.
Table 4 provides a fairly inclusive list of methodological possibilities gleaned from the literature reviewed. From these options, there is no one agreed-upon set that describes the sphere of "qualitative research"; simplistic labels no longer apply (Lincoln & Denzin, 2005). Instead, researchers tend to mix and match in designing research methodologies to address questions from a variety of perspectives and paradigms based on the research question at hand (Bailey, 1992; Lieberson, 1992; Trochim, 2004). What and how questions tend to be deductive, whereas who and why questions tend to be inductive. The two types will likely differ in favored ontological and epistemological approaches (Raadschelders, 1999, 2011). Therefore, it is best to focus on methodology as opposed to methods: it may be that it is not so much the methods we disagree on, but rather our ontological and epistemological assumptions (Denzin & Lincoln, 2005a). Regardless of paradigmatic differences, then, ontological, epistemological, and theoretical assumptions should be disclosed by researchers in explication of their methodology (Bailey, 1992; Orosz, 1997).
In the face of such complex choices, some scholars retrench into privileging the deductive, quantitative approach to inquiry (Brady, 2004; Council, 2002; Denzin & Lincoln, 2005b; Lincoln & Cannella, 2004). However, the methodological or epistemic pluralism movement suggests that there is an important role for both deductive/quantitative and inductive/qualitative approaches to research. The basic assertion is that multiple kinds of knowledge are needed for theoretical advancement and to address complex social problems successfully (Alker, 2001, 2004; Brady, 2004; Farmer, 2008, 2010; Schwartz-Shea, 2001, 2002). For example, some public administration scholars argue that there are legitimate sources of knowledge that are not technical or scientific in nature (Franklin & Ebdon, 2005; Hummel, 1991; Schmidt, 1993; Schon, 1991). These forms of knowledge are based on practical reason that bridges the natural sciences and the humanities by considering the tacit knowledge of political, normative, moral, and aesthetic reasoning (Alker, 2004; White & Adams, 1994a). By including all forms of knowledge, social science becomes a pragmatic, artful craft (Brady, 2004; Hodgkinson, Herriot, & Anderson, 2001; Mills, 1959). Daneke (1990) compares this to the phenomena of wave and particle theory in physics: each is insufficient unless combined with the other. As deductive reasoning relies on inductive reasoning to formulate hypotheses that can then be tested, there is an ongoing cyclic relationship in which one phase leads into the other. Which approach is "better," or where inductive ends and deductive begins, becomes a "chicken and egg" debate. Perhaps it is time to think of science as the whole nest, chicken and eggs included. In this light, methodological pluralism becomes integration. Indeed, some researchers actually combine methods in a given study (Orosz, 1997). For some, the use of mixed methods or research logics is valuable for research flexibility and responsiveness, as well as triangulation for verification, corroboration, and richness of understanding (Babbie, 1983; Bevir, 2004; M. B. Brewer & Collins, 1981; Brower et al., 2000; Cresswell, 1998; Denzin & Lincoln, 2005a; Flick, 2002; Kelly & Maynard-Moody, 1993; Lieberson, 1992; Orosz, 1997; Platt, 1964; Schmidt, 1993; Schwartz-Shea, 2001; White, 1986b; White & Adams, 1994a; Yanow, 2004).
By most accounts, the ontological bridge created by post-positivists makes this possible. To reconcile what positivists understand as objective truth or fact with what interpretivists understand as socially constructed, culturally embedded, or relative truths, we may simply agree to temporarily fix a concept or empirical observation, with the caveat of its contextual (e.g., time and place) or arbitrary (e.g., researcher-selected) nature (Alexander, 1991; Barker, 2000). However, there are different interpretations of the term empirical. Much of the literature reviewed leaps to the conclusion that the term connotes positivism as well as quantitative research methods, even among some promoting the use of qualitative research methods (Bailey, 1992; Houston & Delevan, 1990; J. L. Perry & Kraemer, 1986). But it is more accurate to define empirical research as including all forms of observation and experience (Lowery & Evans, 2004; Schwartz-Shea, 2001; Yin, 1994). In other words, it is research based on actual phenomena as opposed to abstract concepts. Epistemologically, all empirical research methods look to the observable world to inform the abstract world, to one degree or another (Hanson, 1958; Mouly, 1978; Schrag, 1967; Trochim, 2004). Therefore, the term can apply to both research paradigms.

As shown in Table 4, there are myriad methodological possibilities.

Qualitative Research in Public Administration
The discussion of qualitative research in public administration is contained within several different conversation threads. First, there is the notion of methodological pluralism and how it fits within the field's knowledge growth. Second, there is a discussion of the fit between important questions in public administration and qualitative approaches. Third, there are assertions that qualitative research is better suited to the needs of practitioners, the "end-users" of scholarly knowledge.
In sum, this state of affairs suggests that the field is open to methodological pluralism and the qualitative methods it entails. However, quality is still of concern, and methodological improvements based on appropriate evaluative criteria and guidelines will be welcomed by the scholarly community (Cleary, 2000; Hunter, Schmidt, & Jackson, 1982; Jensen & Rodgers, 2001; Lieberson, 1992; Perry & Kraemer, 1986). Scholars are also concerned that methodology fit the research question rather than imposed dictates (Raadschelders, 2005b, 2011; Riccucci, 2010). From the big questions discussion, it is clear that there are research questions that will be most appropriately answered through inductive, qualitative approaches to research, including both interpretive and critical paradigms (Lowery & Evans, 2004; Luton, 2010; Stallings & Ferris, 1988; White & Adams, 1994a, 1994b). Overall, qualitative approaches are: (1) well suited to conceptual and normative questions; (2) appropriate for pragmatic research; (3) intended to find lived meaning in local contexts; and (4) effective in exploration, description, interpretation, and explanation (Luton, 2010).
This methodological fit also translates into value for practitioners, who might prefer qualitative research (Box, 1992; Hummel, 1991; Lincoln & Cannella, 2004). Administrators must often make decisions based on what is believed to be true, right, or good. This requires critical thinking, rather than simple deductive or inductive explanatory logic (White, 1986b). Furthermore, qualitative research results are often more accessible and understandable to practitioners (Luton, 2010). Methods such as interviewing, narrative analysis, ethnographic participant observation, and case study mirror skills that administrators use in practice, and the results are presented in a manner to which they can relate. Moreover, these types of qualitative methods are often used in action research and applied research geared toward changes in practice that are relevant to practitioners (Luton, 2010). In sum, qualitative research methods are effective in "cultivating authentic connectedness between academics and practitioners" (Dodge et al., 2005, p. 286).

Doctoral Curriculum
Assuming that the field of public administration largely accepts the value of methodological pluralism and its attendant qualitative research methods, we must turn to the issue of preparing scholars to conduct such inquiry. Lowery and Evans (2004) recommend that curricula in both master's and doctoral programs be expanded to include a broader range of research strategies and tools. In regard to qualitative research methodologies in particular, Perry & Kraemer believe "…public administration scholars need to become both more proficient practitioners of this craft and contribute to the advancement of these methods" (1994, p. 106). For example, contemporary understandings of public policy as interpretive and symbolic processes (Schneider & Ingram, 1997; Stone, 2002; Yanow, 1995) conflict with traditional positivist approaches to analysis of objective facts and organizational structures. Therefore, to have sufficient capacity to analyze policy from these post-positivist perspectives, new skills are needed (Kelly & Maynard-Moody, 1993). But to become proficient in these skills, scholars need better training in conducting qualitative research (Orosz, 1998).
On the one hand, training can occur post-doctorate. For the benefit of existing faculty members who would like to expand their capacity as teachers and mentors in qualitative research methods, some continuing education opportunities are available (CQRM, 2010). On the other hand, for sustained methodological pluralism, future generations of scholars are most readily reached through their doctoral programs. In fact, the traditional objectives of doctoral education are the reproduction of the professoriate and the preparation of researchers (Adams & White, 1995; Felbinger, Holzer, & White, 1999; Hambrick, 1997; McCurdy & Cleary, 1984; Ross, 1995; White & Adams, 1994a; White et al., 1996). Thus, this discussion focuses on programs that maintain a goal of preparing scholars for both research and teaching roles.
Within doctoral programs, there are two main components that impact the preparation of students: (1) faculty mentoring and course design; and (2) program requirements. Both issues have been blamed for poor quality research (McCurdy & Cleary, 1984; White et al., 1996). Mentoring is noted as particularly important to the improvement of qualitative research methods. The dissertation committee experience should prepare scholars for carrying out their future research agendas and publishing activities, including both diverse topics and methodological approaches (Jensen & Rodgers, 2001; Orosz, 1997; White, 1986a). However, faculty typically offer mentoring based on their own training and research practices rather than on what the candidate needs. As has been humorously noted, "by some magical process, it usually turns out that what we do is better than what we do not do" (Lieberson, 1992, p. 2). Because positivist, behavioral research is most likely to be published, it is most likely what is done, perpetuating somewhat of an "iron cage" (Lowery & Evans, 2004). However, this tide appears to be turning based on the research findings and trends noted from 2004-2005 to 2011-2012.
Doctoral curriculum design is guided by many factors, including what peer institutions are doing, particularly those which are aspirational. Although political science is a cognate field to public affairs, an in-depth study of research methodology curricula in 57 political science doctoral programs is instructive: What is particularly useful about doctoral program requirements and offerings is that they reflect the collective decisions of faculty "on the ground," that is, curricular requirements and offerings provide the structural parameters within which, on a daily basis, individual faculty work with and train those who will ultimately replace them in the discipline… doctoral requirements are a faculty's collective, formal enactment of its vision for the discipline's future. (Schwartz-Shea, 2001, pp. 3-4) These curricular visions are also revealed in the textbooks used and the syllabi designed. In aggregate, Schwartz-Shea (2001) found that a significant proportion of political science doctoral students received no formal training in qualitative research methodology, let alone the specific methods and techniques available. It would appear from the empirical evidence on public administration reported herein that our field has made improvements, but still has room for growth.

Method
Following the logic that accreditation by the National Association of Schools of Public Affairs and Administration (NASPAA) is widely considered the gold standard in public affairs, doctoral programs affiliated with accredited master's level public administration (MPA) programs are a reasonably inclusive and desirable group to study. In fact, surveys of these doctoral programs have been conducted in the past. Brewer et al. (1998) found that most programs had a strong research requirement, yet their interviewees disagreed, saying that they were still trying to improve the quantity and quality of research, and indicating that the questionnaire was too narrowly focused on mainstream social science research and methods. In another review, of seventy doctoral program catalogs, the vast majority were found to offer only one research course intended to cover all philosophy of science and research methods needs (Felbinger et al., 1999).

Study Population and Sample
According to NASPAA data, there are 58 doctoral programs affiliated with accredited MPA programs, as shown in Appendix A (NASPAA, 2011). The original October 2004 data set included 49 institutions classified by Carnegie as Research Extensive or Research Intensive institutions. However, as pointed out during presentation of the results in 2005, this classification scheme excluded important doctoral programs. Therefore, this update includes all doctoral programs affiliated with NASPAA-accredited MPA programs as of September 2011.
After accommodating institutional changes over the intervening years, a total of 45 universities appear in both data sets. This is the sample used for trend comparisons.

Study Methodology
While the original study included syllabi content analysis and semi-structured telephone interviews with doctoral program directors (Stout & Morton, 2005), this update is based solely on content analysis of program descriptions. This decision rests on the assumption that if the proportions of programs requiring and offering qualitative research methods remained low, the reasons were unlikely to have changed in the intervening years. However, should dramatic differences be found in trends, then the more extensive interview process might be replicated to determine what has changed. The review of program information determined: (1) which programs require qualitative research methods for graduation; and (2) which programs offer a qualitative research methods course internally. Fortunately, online content has improved dramatically over the last seven years, so the update required little more than web site reviews.

Findings
Appendix A shows which universities in both samples require qualitative research methods and which offer a course within their program. In the original study, some program directors reported allowing qualitative research methods to fulfill a portion of overall research methods requirements based on student interest, course availability, and approval of the program of study committee. These are not counted in the requirement category, which denotes a mandatory requirement for degree completion. Only 18 percent of the programs mandated a qualitative research methods course for all students. This is juxtaposed with what appeared to be a nearly universal requirement of at least one quantitative methods course above and beyond a general methodology or philosophy of science course. Furthermore, 51 percent offered at least one qualitative methods course internally within the department, while the remainder of the programs relied on external resources to meet either a research methods requirement or elective.
In the 2011 sample, over 62 percent of the programs now mandate a qualitative research methods course for all students, and nearly 71 percent offer a course internally. This is a marked increase that could be attributed to the scholarly discussion in the literature reviewed. Comparing 2004-2005 to 2011-2012, of the 45 universities in both samples, 18 (40 percent) made no changes to qualitative research methods requirements or course offerings. Seven programs no longer offer a course internally, and five of those also do not require one. However, seven programs added the requirement, five programs added a course, and eight programs added both the requirement and a course. This means that nearly 45 percent of programs have increased requirements for or offerings of qualitative research methods, while only 15 percent have decreased offerings. This is a substantive change over the seven-year period.

Conclusions and Recommendations
A number of conclusions can be drawn from the literature review and the two empirical studies. First, the question of whether public administration is a discipline or merely a field of study has not prevented those committed to the study of the practice from conducting valuable research. Public affairs is an area of inquiry that is both multidisciplinary and interdisciplinary in nature, and it is rich in both theoretical and empirical issues of interest to researchers. The theoretical and empirical arenas of research are very diverse, and important questions range from the highly abstract to the very practical. Thus, the research methodologies available to scholars need to be equally varied. Some questions will lend themselves well to deductive inquiry, while others will be better explored through inductive inquiry. It follows that both quantitative and qualitative research methods are required to make the best fit to the question at hand. Use of both types of inquiry will help build knowledge in the field, connect theoretical and empirical research, and answer a rich array of big questions. Therefore, the field requires a broader, more methodologically inclusive definition of scientific legitimacy. Methodological pluralism will also help forge better bridges between applied research on what problems exist in the enterprise of public administration and more basic research on why those problems occur, who is involved, and how to have a positive impact.
There is a strong preference for qualitative research in public administration, with the case study being a popular approach. This is quite common among practice fields, and compelling arguments have been made for the validity and usefulness of the case study, as well as other interpretive and critical approaches to naturalistic inquiry. However, the quality of research has been found lacking based on a diverse set of assessments and criteria. To move these research designs from mere description to analysis or diagnosis (to build theory, not just illustrate practice), qualitative research methods need improvement. Attentiveness to appropriate quality criteria is needed both in individual studies and in reassessments of the field's production. Great advancements in the study and practice of qualitative research methods have been achieved over the last several decades, of which public administration scholars may or may not have availed themselves.
Some of the most important sources of knowledge and experience in research methods for any scholar are coursework, research, and faculty mentoring during the doctoral program. It follows that if methodological pluralism is to be achieved in public administration, robust preparation in the philosophy of science, the history of research in the field, and diverse research methodologies and methods must be available through courses and faculty mentors. One might even go further to suggest that a more expansive core course requirement may be necessary to ensure that such preparation is obtained. Similar to political science, "the field deserves students trained in both quantitative and 'qualitative' methodologies so that they are aware of the range of possible methods they should consider for their substantive research questions" (Schwartz-Shea, 2001, p. 32). This line of thinking brings the discussion to the final topics of relevance to qualitative research in public administration: research methods curriculum and pedagogy.
It is clear that the trend in public administration doctoral programs is to increase qualitative research methods requirements and course offerings. However, over a third of the sample still do not require training in qualitative research methods, so there is still room for growth. Curricular reform is a delicate subject at the doctoral level, however, because the doctorate is the source of professional and disciplinary identities (Alker, 2004). The substantive change seen in qualitative research methods requirements and course offerings in public administration doctoral programs over the past seven years points to the need for a new round of "research on research" in public administration. While this agenda has been carried forward since the original study (see, for example, Raadschelders, 2005a, 2005b, 2008, 2011; Raadschelders & Lee, 2011; Riccucci, 2010), replication of earlier assessment studies is also needed to back up claims such as, "If the study of public administration is lacking it is not in the quality of research being done, but in the lack of interaction between scholars who work from very different approaches" (Raadschelders, 2005b, p. 596). Such a research agenda should be sponsored by professional associations that have an interest in improving the quality of public administration research.
There also needs to be replication of the various studies that have assessed the research being conducted by students, as evidenced in dissertations. But reliance on abstracts alone is questionable because they are not always written in a manner that fully reveals critical aspects of methodological design, often focusing more on purpose and findings (Adams & White, 1994; Cleary, 1992, 2000; White, 1986a). In the program director interviews, it was surprising to find that doctoral programs do not commonly track or analyze dissertations (Stout & Morton, 2005). Doctoral students want broader and more robust methodological preparation (Jordan, 2005). But if programs do not know the type of research their students are producing, how can they be responsive to their needs for mentoring and course offerings? Internal tracking would help ensure that abstracts include clear methodological explanation. Disciplinary adoption of a common format for tracking methodology, so that there is a shared definition of qualitative, quantitative, and mixed-methods research, would also be beneficial.
Replication of quality assessment studies of both published research and dissertations should be completed using appropriate criteria as suggested by the literature. A comprehensive but manageable set would include: authenticity, transferability, transparency, intersubjective agreement, relevance, and criticality. Authenticity ensures thick description, reflexivity, and integrity of reporting. Transferability ensures that an appropriate methodology has been used for the question at hand. Transparency ensures that a systematic approach has been used and adequately explicated. Intersubjective agreement demands evidence that the participants in the study support the findings. Relevance ensures that the substantive focus is of use to the field. Criticality ensures that even when the focus is on questions of how, the underlying assumptions will be explored and revealed. The specific manner in which these criteria are met will differ across approaches in regard to reporting styles and disclosure, and it should be incumbent upon the researcher to explicate how the project meets these six general criteria appropriately for the approach chosen. This requires both students and scholars to be more attentive to methodological disclosure.
The final recommendation is to attend to the related and very important question of designing research methodology pedagogy appropriate for those who seek to become practitioners rather than scholars. As noted in the introduction, this is the sixth strand in the scholarly conversation on public administration research, which was not covered herein. While this issue is particularly important at the master's level of education (see, for example, Horne, 2008), it is also of concern to those who plan to enter the workforce as policy analysts, program evaluators, and the like. Applied research is a particular subset of science: not all social science is applied social science. Therefore, in a practice-oriented field like public administration, we must be even more attentive to the particular methods being taught within the broader methodological paradigms (Gill & Meier, 2000). This would be a third valuable stream of updated research on research in public affairs.

Table 1. Criteria used to judge research quality in public administration

As shown in Table 1, the criteria used to judge research quality vary from scholar to scholar.

Table 4

Another option is to simply allow the field to continue to change organically in response to the new generation of scholars coming into leadership positions. As Meier notes, "Most new PhDs are now far better trained methodologically than I was" (2005, p. 664). But even these emerging scholars may need some guidance in redesigning curricula to meet the demands of methodological pluralism. Scholars have identified a typical course listing that would cover important theoretical foundations, performance applications, and research methodologies. For example, Felbinger et al. (1999) envision a research curriculum composed of three courses, at minimum: Philosophy of Social Science, Quantitative Methods, and Qualitative Methods. This is similar to Schwartz-Shea's (2001) recommendations for philosophy of science, scope and/or history of the discipline, quantitative research methods and statistics, and qualitative and interpretive research methods. In 2004-2005, 51 percent of the public administration doctoral programs offered a specific qualitative methods course; in 2011-2012, 71 percent offer a course.