Nonpartisan Education Review / Essays: Volume 3, Number 6

Reauthorization of NCLB:

Time to Reconsider the Scientifically Based Research Requirement

Suzanne Franco

Wright State University

The federal initiative, NCLB, includes guidelines about educational research methodology as well as school practices ("No Child Left Behind Act," p. 532). The law states that reforms and school practices should be based on scientifically based research (SBR). SBR is mentioned over 100 times in NCLB (A. Smith, 2003, p. 126). After the strong emphasis on disaggregation of test scores, NCLB’s reference to SBR has drawn the second-largest number of responses in the literature (Viadero, 2004). Educational researchers spend time “fighting these designs when they are inappropriate or irrelevant, which is often the case” (Eisenhart, 2005, p. 246).

In response to the NCLB SBR mandate, the National Research Council (2002) published a report, Scientific Research in Education (SRE), addressing the question of the meaning of SBR. On the NCLB website, the U.S. Department of Education explains that “scientifically based research means there is reliable evidence that the program or practice works” (n.d.). The explanation includes a reference to experimental study involving experimental and control groups. The report states that requiring SBR “moves the testing of educational practices toward the medical model used by scientists to assess the effectiveness of medications, therapies and the like” (A. Smith, 2003, p. 126).

The strong emphasis on SBR leads one to the conclusion that forms of research that do not conform to SBR are invalid (Mayer, 2006, p. 8). Having the federal government legislate SBR is unusual and can be interpreted to have political overtones. Howe (2005) explains that research methodology is “unavoidably political by virtue of adopting certain aims, employing certain kinds of vocabularies and theories, and providing certain people the opportunity to be (or not to be) heard” (p. 321).

It has been suggested that SBR was mandated to improve the credibility of educational research and thus to increase the likelihood of continued funding of education research (Odom et al., 2005, p. 144). Another possible reason for including the SBR requirement may have been to force educational researchers to focus research on programs that are known to improve student achievement, thus reducing the achievement gap. Reducing the achievement gap is morally correct; however, the SBR requirement may actually create a research gap. Some research questions regarding the achievement gap do not lend themselves well to SBR, leaving them unanswered. This paper reviews possible rationales for the SBR requirement, sources of variation not accounted for in SBR studies, and examples of non-SBR research that have had major impact in the field of education. The relationship between educational and medical research, as well as that among multiple research methodologies, leads to this recommendation: the reauthorization of NCLB should embrace the reality that the research question alone should determine the research methodology, leaving no research methodology behind.


The federal endorsement of SBR was influenced by the concept of evidence-based education (EBE) (Whitehurst, 2001). Whitehurst, then the Assistant Secretary for the Office of Educational Research and Improvement, defined a hierarchy of preferred research evidence that places randomized trials first. Quasi-experimental evidence that includes pre- and post-evaluation ranks second. Correlational studies with statistical controls rank third, and those without statistical controls rank fourth. Case studies rank fifth. Whitehurst proposed that educational research should become more evidence based, using randomized trials as the preferred research design. His proposal was the basis from which the NCLB/SBR mandate developed.

The essence of this hierarchy of research methods is that randomization is the standard in federal policy. The belief is that by implementing a federal preference for SBR and EBE, effective reforms will be identified and the achievement gap will be eliminated (A. Smith, 2003, p. 126). Moreover, SBR research generally provides the effect size of successful reforms, a useful indicator for practitioners. A repository of successful research findings would allow educational leaders to evaluate evidence-based research for possible adoption.

The What Works Clearinghouse (WWC) was designed to “provide educators, policymakers, researchers, and the public with a central and trusted source of scientific evidence of what works in education” (Institute of Education Sciences). WWC provides an intervention rating, an improvement index, and the extent of the evidence used in the various research projects. The website was designed to allow educational leaders to increase their awareness of programs that reduce the achievement gap and to select programs that might have a quantifiable effect in their schools. The belief is that the NCLB requirement for SBR will ensure that high-quality reform research is available to educational leaders through the WWC.

To emphasize the importance of SBR and EBE, the Institute of Education Sciences (IES) was established in 2002 through the Education Sciences Reform Act. Whitehurst was named the first director of the IES. One of its missions is to fund research and provide guidance to decision makers on EBE and SBR results. Funding is only awarded to those proposing to use SBR (Hardman, 2003, p. 155).

SBR and Educational Research

A basic goal of educational research is to determine causation. The advantage of using SBR is that having a control group allows a better explanation of the variance in the data. Even with a control group, however, educational research must account for the intentionality of both students and teachers. This is not trivial. Intentionality refers to the ability of students and/or teachers to behave in a way that is not norm-regulated (Howe, 2005, p. 311). To determine the effectiveness of an intervention, researchers must assume that control and test group teachers deliver the same content in exactly the same way to the same mix of children. Research design options can minimize errors in these assumptions, but not without considerable effort and collaboration (National Research Council, 2005, p. 30). Student populations vary widely in their ethnicity, socio-economic status, and family backgrounds. Those factors can be controlled for, but student motivation, the intentionality of students and teachers, and the variety of administrative philosophies make it impossible to control for all variability. Many of these constructs cannot be measured or scaled as a data point; however, each can strongly influence educational achievement. The variance inherent within each of the constructs does not lend itself well to SBR, even with randomization and a control group.

            Students are unique

Variability among students is a challenge to account for in any research design that studies student achievement. Coleman (1966) demonstrated that minority status and SES were the major predictors of student achievement. A randomized controlled trial (RCT) can account for obvious factors such as ethnicity, gender, achievement status, SES, etc. These factors are reportable and usually maintained in existing school databases. Preliminary analyses can demonstrate that the experimental and the control group come from similar populations, allowing researchers to assume that randomization will mitigate the effects. However, continued research on student achievement has identified a number of factors that impact achievement that are not easily quantified:


    Bandura and Barbaranelli (1996) identify parent expectation of student achievement as one of the most dominant factors related to student achievement. Children tend to perform as their parents expect.

    Spoken language, mobility (Mao, Whitsett, & Mellor, 1998), and special needs (Spooner & Browder, 2003) are strongly related to student achievement.

    External factors such as peer pressure, emotional illness, physical handicaps and social needs (Bandura & Barbaranelli, 1996) contribute to the complex process of learning on a daily basis.

    Intentionality varies daily and impacts student motivation and achievement (Howe, 2005).

    The interaction among students contributes to student achievement as well. Wenglinsky attributes 50% of student achievement to the classroom effect (2002). Student interactions with peers are included in his definition of classroom effect.

Collecting the above student information for an RCT design would be extremely difficult and costly if the sample is to be large enough to give the results statistical power. SBR can control for the easily measured factors but may not be able to successfully account for the constructs that further contribute to the uniqueness of students and achievement.
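The sample-size concern above can be made concrete. As an illustrative sketch (not taken from the essay), the standard normal-approximation formula for comparing two group means shows how quickly the required sample grows as the expected effect shrinks; the function name and the example effect sizes are hypothetical.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a standardized
    mean difference (Cohen's d) with a two-sided, two-sample test.

    Normal-approximation formula: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    """
    z = NormalDist().inv_cdf  # standard-normal quantile function
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return ceil(n)


# A "medium" effect (d = 0.5) needs about 63 students per group,
# while a "small" effect (d = 0.2) needs about 393 per group.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```

Since many of the constructs listed above add unexplained variance, the realistically detectable effect shrinks, and the required sample, and cost, grows quickly.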

            Teachers are unique

Algozzine states that “most of what we know, believe, and practice in America’s classrooms is not the result of knowledge gained from clinical trials” (2003, p. 158). Case studies of classrooms or teachers support his statement (Skretta, 2007). In a given building, two classroom teachers teaching the same grade level will not deliver instruction in an identical or predictable order unless the lesson is scripted. Teaching is a “dynamic, interpersonal process involving mutual influences, plots and subplots” (Kennedy, 1999, p. 525). In determining causation, the instructional method and order of instruction could have an impact on achievement. RCT research studying whether an instructional intervention causes improved student achievement may be working under the assumption that the school is similar to an assembly line, with the student as the product and the teacher as the assembly line worker.


Teacher uniqueness causes a break in this industrial analogy at the point of the assembly of the product. If the learning lessons can be thought of as the parts of a final product, a classroom as an assembly line would have all learning lessons applied to the students in a predefined, identical order. The learning order would be predetermined just as the parts in an assembly line. The assembly line worker does not have a choice of what part to install when; she installs the part that is delivered on the line.


In most classrooms, however, the assembly line worker (the teacher) determines what works best and when it works best for her products (students). For example, one teacher may prefer to use grammar to teach writing; another may teach writing first and grammar later. In addition, the teacher can make other changes in the assembly (learning) process according to ongoing formative assessments. Teacher decisions about instructional method change about every three or four minutes (Stiggins, 2004). The variety of instructional methods and the order of delivery reflect a few of the unique factors related to teaching. Such variety is difficult to identify, much less control. It is even possible that such instructional decisions are not included in the design or analysis of some RCT research.


Another source of variability associated with teacher uniqueness is the relationship between the teacher and student. This relationship is known to affect student achievement (Wenglinsky, 2002). The relationship can have a positive or negative effect, depending on factors ranging from the time of day to the content being studied. By law, NCLB SBR/RCT studies investigate causation in relationship to academic growth. However, teachers strive to help each child grow socially as well as academically. Student social growth is not reported. Teachers who focus more on social growth than academic growth create an unidentified source of variability in any SBR study that does not include student social growth in the design.


The reality is that not all teachers have found tools that work for all children; not all teachers manage the learning process of their students in the same manner. Their practice is continually evolving. The uniqueness of the classroom teacher embodies factors that are hard to define and test using SBR.



The elephant in the room that SBR cannot address is context. Context is important as it relates to the students, staff, and other practitioners (guidance counselors, teaching aides, administration) involved in education. Context includes factors such as teaching styles, values, school or district budgets, leadership, community support, student mobility rates, culture, local environmental factors, etc. Although some factors might be “controlled for” in a properly designed SBR study, many cannot be measured in a cost-effective manner.


The researcher may not be aware of the contextual differences between test sites. Berliner (2002) documented a study that demonstrates the importance of context in the interpretation of results. Analysis of the data after a longitudinal RCT on instructional models of early childhood education found more variance within the test sites than between them. The differences between the individual test sites were significant; however, the context of each test site contributed substantially to the higher variance within sites. The results could be misinterpreted if the contextual differences were not identified.


Contextual issues can mask the results of an SBR study, as in the study described above, or when the results are not consistent at other trial sites. Educational research can only begin to understand the effects of context when using different school sites. The natural sciences, like physics, find regularities in their fields, whereas educational researchers struggle to find regularities across the social sciences. SBR does not lend itself well to identifying or quantifying context.


For contextual information, qualitative research is the most effective methodology. In fact qualitative research studies can help explain quantitative research results by including the contextual data in the interpretation of results. Qualitative data allows one to drill down to capture and document the numerous levels of interaction in a classroom. Quantitative research can identify whether one variable causes change in another variable; qualitative research can determine how. Maxwell (2004) refers to the differences between qualitative and quantitative research as causal mechanisms and causal explanations. Quantitative research identifies causal mechanisms; qualitative, causal explanations. Useful educational research should identify both.


NCLB’s preferred research methodology and reference to SBR as a gold standard relegate context to noise in a variance theory approach to research. Replication through control groups is preferred. However, because context refers both to social and political factors, replication may do more harm than good in explaining causal mechanisms. Finding a control group that mimics context is very difficult. Researchers may miss or misstate the explanation of a relationship. Major differences in social/cultural environments can confound causal mechanisms; local politics can vary widely among schools. An interpretation of an SBR study might misstate an important causal explanation by ignoring context in the design of the research. “Solid scientific findings in one decade end up of little use in another decade because of changes in the social environment that invalidate the research or render it irrelevant” (Berliner, 2002, p. 5). Knowing context can render an intervention more applicable to one environment than another. For educational research, context is a known source of variability that SBR does not address easily.



Non-SBR Educational Research


It is interesting to note that many studies that were not SBR have had a major impact in education. Erik Erikson conducted ethnographic research in his cultural studies (1950). His work was instrumental in helping educators understand the complex stages of child development and is used today to better understand the urban child. The work also helped educators understand the importance of the interaction between children and the adults in their lives: mutuality. Piaget preceded Erikson with his writings about the levels of learning. Neither of these major works used SBR. In the legal arena, both the Brown v. Board of Education ("Brown v. Board of Education," 1954) and the San Antonio Independent School District v. Rodriguez ("San Antonio Independent School District v. Rodriguez," 1973) cases used non-SBR research to help prepare the cases. As a major argument, the Brown case used a psychological study that implied that African American children have negative self-images; a case study of finances contributed to the success of the Rodriguez case. Historically speaking, these are examples of non-SBR research having had a major impact on our educational system.


Some researchers state that non-SBR is the “hardest” methodology to implement since it requires such detailed documentation of the activities and environment (Berliner, 2002). It is more labor intensive and requires extended time for data collection. However, there is no demonstrable reason to cast aspersions on non-SBR results. Research methodology should be adapted to the research question. If the research question is best answered with a non-SBR design, then non-SBR should be used. If the research question is a valid question for practitioners, methodological constraints should not prevent a project from moving forward.


Non-SBR and NCLB

Within NCLB there is an intervention that was not tested with SBR before it was included in the mandate. If a school does not meet AYP for a specific period of time, NCLB requires the school to provide transportation for any of its students who choose to attend a local school that has met AYP. In other words, it provides families in underperforming schools a choice in schooling. Moving a child to a different school may work for some students but not for others. In fact, very few parents take advantage of this NCLB provision (Howell, 2006). Surveys indicate that families value the local, neighborhood school more than they value the AYP criteria; their primary interest in school choice is in placing their children in a private school. Non-SBR studies of the attractiveness of this alternative before implementation could have provided legislators with a better understanding of how popular this mandate would be. As it is, the mandate is expensive to implement, taking money away from instruction in the very schools that need it the most.


Teasley and Berends (2007) document the effectiveness of the school choice mandate in NCLB. After three years, the children who did take advantage of the school choice option had not increased their academic achievement significantly more than those who did not elect to move. The achievement data alone cannot answer the ‘why.’ NCLB authors should take note of this analysis. What was the research base for this initiative? What research should be done to determine why parents do not take advantage of this option? What research should be done to determine why achievement is not significantly better after the student moves? Qualitative data could provide insights that would make any future NCLB programs more powerful. A less restrictive mandate on research methodology is called for as the reauthorization of NCLB is discussed.



SBR and Medical Research as the Standard

The earlier reference to SBR bringing educational research to the medical model must be addressed (A. Smith, 2003). Given the back-tracking that has recently occurred in pharmaceuticals regarding Vioxx (Couzin, 2005), Tylenol (DeNoon, 2004), ibuprofen (S. Smith, 2005) and hormone replacement medication ("Women’s Health Initiative (WHI) update--2002: WHI hormone program," 2002), for example, it is not an entirely desirable goal to strive to bring educational research to the medical model. SBR provided the results that allowed the above named drugs to be marketed initially. Later medical SBR studies tell consumers that these drugs are harmful. Consumers may be confused.


Critics of educational institutions complain about the propensity for fads and trends in delivering instruction (Maddux & Cummings, 2004). Similarly, pharmaceutical companies can be criticized for creating trends in managing pain or menopause. The criticism in education is that school reforms are implemented that have not been tested using SBR design. NCLB proposes to remedy this perception by preferring (and thus funding) SBR design only. Although pharmaceuticals have more successes than failures and the medical field is a different context than education, experience in the medical field implies that using SBR is not totally failsafe.


Both the educational and medical fields suffer from longitudinal invalidation of results. For educational research to mimic the medical field, other types of research should be encouraged. Recently a medical drug trial involved only six participants (Sailor & Stowe, 2003). There was no control group. Algozzine (2003, p. 159) reviewed four issues of the Journal of the American Medical Association to learn more about the research model educational researchers are to emulate. Of the five articles in each issue, only one was based on SBR as defined in NCLB. The others were illustrative and comparative research, with no randomized control groups. Apparently the Journal of the American Medical Association understands that all methodologies are to be respected. Had the originators of SBR and EBE never reviewed medical research to determine whether SBR is actually the medical standard? Do they really believe that only one research methodology contributes to the development of knowledge?


A final note on the medical model as a standard involves the health care gap. Currently there is a clear gap in health status associated with lower socioeconomic citizens. Similarly the student achievement gap is highly correlated with lower socioeconomic status. If one goal of NCLB is to eradicate the achievement gap in student achievement, why would education look to the medical field for the appropriate scientific method to test interventions? The medical model is not successful in reducing the health status gap (Howe, 2004). Perhaps more open acceptance for all genres of research design would shed some much needed light on why the achievement gap and the health care gap continue to grow.



Non-SBR Research Methods


Fraenkel describes the scientific method as a testing of ideas in a public arena (Fraenkel & Wallen, 2003, p. 6). By keeping the details in the public arena, the researcher provides enough information to have the experiment duplicated. There is no mention of control groups, randomization, qualitative or quantitative methods. The research question determines the methodology.


Generally speaking, most research methods fall under the heading of quantitative or qualitative research. A quantitative researcher generally tries to establish relationships between variables, sometimes explaining the causes of the relationships. A qualitative researcher embraces the multiple-reality theory and tries to understand realities from the participants’ viewpoints. A quantitative researcher wants to provide generalizations that can be applied to other situations; a qualitative researcher wants to truly document a particular situation. Both methodologies are popular within the field of education and other social sciences.


The unfortunate fallout from NCLB and the NRC report’s clear preference for SBR is that qualitative research (which does not include randomization or control groups) appears to be relegated to an auxiliary role among scientific methods. Few social scientists aspire to be experts in an auxiliary, secondary arm of research. Quantitative methods are strongly related to rigorous statistical methods. Perhaps, in the pursuit of the best possible results, the NCLB authors determined that the mathematical procedures possible in a quantitative design rank higher than those used in the majority of qualitative methods. However, in quantitative analyses using multivariate methods such as structural equation models or hierarchical linear models, the inclusion and combination of variables is always based on a pre-existing theory. This theory must come from observations, past research results or field experiences. It is very likely that a good deal of this theory comes from qualitative studies. Furthermore, once the statistical analysis has been completed, the results cannot be interpreted correctly without the theory behind the research. The qualitative research helps explain the “why.” Once the results are explained, further research may be in order. This process requires that the quantitative and qualitative methods complement each other. Neither is auxiliary. Unfortunately, the existing federal initiative, NCLB, implies otherwise.


Some educational research on student achievement does match well with SBR. Randomized controlled trials (RCTs) suit pre-packaged, easy-to-implement interventions that are “teacher-proof.” “After Sputnik, Friedman [director of the New York Hall of Science] says there was a misguided attempt to ‘teacher-proof’ the curriculum” (Seaborg, 2004). The idea was to develop scripted lessons that can be delivered by anyone. The teacher-proof philosophy is still alive. McKenzie (2003) references such interventions and explains how ineffective they are in the classroom. He warns against standardizing instruction in a teacher-proof environment. Although this may ameliorate some of the variance referenced earlier (teacher/student uniqueness), it also takes away the creativity of the teacher and the magic of the relationship between the teacher and the student. Not surprisingly, the companies that produce the pre-packaged interventions do a wonderful job of marketing through published lists of packages or strategies that supposedly work. No one knows if the documented student achievement gains translate into sustained gains.


An intervention that garnered numerous awards for improving student achievement is the Brazosport method (Cook, 2003). The method is a building wide intervention that scripts what the teachers say and do, making sure all students receive the same instruction. Every grade level is departmentalized. The method concentrates on instruction and removes the personal nature of teaching from the classroom (Karns, 2005). The prescribed classroom procedures and instructional methods do not include differentiation to help students at different levels of learning. A presentation given by classroom teachers of the Brazosport school district revealed a lack of systems to address students’ social needs and an extremely high pressured culture for students and teachers. Teacher turnover was high.


Teacher-proof methods do not deal with the social needs of the child or the teacher/student relationship. If the research that determines a program’s effectiveness includes a contextual strand, perhaps these issues will be identified and addressed. Educational leaders considering an intervention should have information about the contextual changes brought about by an intervention that improves student achievement. Possible school cultural changes should be respected. The Brazosport intervention did improve student scores, but at what cost? There is no evidence that the students are better prepared to be contributing members of our society; the analysis involved test scores alone. Educational leaders need both the qualitative and quantitative results of a research project to be able to make informed decisions about programs that show increased student achievement.


Two relatively new types of research becoming more popular within the field of social science are action research and meta-analysis. Meta-analysis combines a variety of studies on a specific topic and uses statistical methods to produce an overall conclusion about the topic. It is a rigorous analysis of all the data collected from the numerous studies. Results from a meta-analysis can allow numerous studies from different time periods to provide further insight into a research question (Fraenkel & Wallen, 2003, p. 88). The attraction of this method is that it recycles data for further studies. It is efficient in both cost and labor. “An emphasis on meta-analysis has often revealed that we actually have more stable and useful findings than is apparent” (Stanovich & Stanovich, 2003, p. 18). It is unclear whether a meta-analysis will be considered SBR if one of the data sets did not come from an RCT study.
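The statistical core of the simplest (fixed-effect) meta-analysis can be sketched in a few lines: each study’s effect size is weighted by the inverse of its sampling variance, so more precise studies count more toward the pooled conclusion. This is an illustrative sketch with hypothetical numbers, not an analysis drawn from the essay.

```python
from math import sqrt


def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis.

    Each study contributes its effect size weighted by 1/variance;
    returns the pooled effect size and its standard error.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    return pooled, sqrt(1.0 / total)


# Three hypothetical studies of the same intervention:
effects = [0.30, 0.10, 0.25]    # standardized effect sizes
variances = [0.01, 0.04, 0.02]  # sampling variances (smaller = more precise)
pooled, se = pooled_effect(effects, variances)
print(round(pooled, 3), round(se, 3))  # 0.257 0.076
```

Note that the pooled standard error is smaller than any single study’s, which is why combining existing studies can reveal "more stable and useful findings than is apparent" from the studies taken one at a time.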


Action research is used when there is a problem to be solved or a practice to be better understood; the research is based on the interest of the researcher. The researcher uses straightforward steps to identify the goal and methods to reach the goal. Implementation brings about a change that attempts to solve the problem. Analysis determines the level of success. The process is iterative. As the questions are answered, new questions arise and new methods are used to answer those questions. The population studied is not randomized and there are usually no control groups.


Dixon-Krauss (2003) has explained that action research, too, can be considered SBR. He suggests replacing the preference for research that can be replicated and generalized with a preference for “research that goes through a process of transformation and adaptation into contexts” (p. 10). This would allow for the continuous state of transformation that action research represents. It would also increase the likelihood that action research could be considered SBR.


It is hard to see how action research alone meets the needs of educational leaders as they search for interventions that improve student achievement. Including action research in a mixed method model makes more sense. Clearly formalizing the research question collaboratively with practitioners is the first step; identifying the methodologies appropriate for the research question is the second step. Action research may be a methodology appropriate in a mixed-methods approach to the research question.





The impact of funding only SBR is immeasurable with respect to future educational research because it withholds funding from research questions that do not fit the SBR methodology. Educational researchers are responding to the NCLB mandate by designing more SBR research when that methodology is the most appropriate for their research questions. Research questions not aligned with SBR are left untouched, possibly creating a research gap.


The federal government should not define a single research methodology as a gold standard for educational research. The singular emphasis on SBR and EBE as the preferred research methodology is troubling. One rarely sees such a blatant disregard for other methods of research. Mandated research methods do not ensure quality research results; they may create research gaps. The practice of education has unique characteristics that require more than one genre of research. The first step in research is to define the question; only then can one determine the methodology that best aligns with the research question. Mandating SBR ignores this process and determines the methodology before the research question is finalized.


Social scientists should have the freedom to argue and discuss educational issues, and they should be encouraged to investigate each topic using the most appropriate method. Rational debate helps the social sciences grow; controlling research methodology will stifle new directions. Raudenbush (2005) describes ideal scientific inquiry as well integrated and methodologically diverse. An NCLB mandate for SBR allows for no such diversity.


Just because SBR is the standard for the medical field does not mean it should be the standard for educational research. Kuhn (1970) documents scientists’ reluctance to consider alternative paradigms in the move from phlogiston science to oxygen science: many could not embrace oxygen science because they would not accept alternative research methods. In Kuhn’s view, scientists are not neutral observers of the world; they become comfortable with their methods and resist change. He warns the scientific community not to assume that one scientific method is superior to another, urging tolerance of multiple perspectives in scientific inquiry so that new knowledge can continue to be discovered. Endorsing only SBR for educational research runs contrary to this counsel and, from Kuhn’s historical perspective, may be misguided and rather short-sighted.

Citation: Franco, Suzanne (2007). Reauthorization of NCLB: Time to Reconsider the Scientifically Based Research Requirement. Nonpartisan Education Review / Essays, 3(6). Retrieved [date] from http://www.nonpartisaneducation.org/Review/Essays/v3n6.pdf




References

Algozzine, B. (2003). Scientifically based research: Who let the dogs out? Research & Practice for Persons with Severe Disabilities, 28(3), 156-160.

Bandura, A., & Barbaranelli, C. (1996). Multifaceted impact of self-efficacy beliefs on academic functioning. Child Development, 67(3), 1206-1233.

Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18-20.

Brown v. Board of Education, 347 U.S. 483 (1954).

Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., et al. (1966). Equality of educational opportunity. Washington, D.C.: U. S. Government Printing Office.

Cook, G. (2003). Turnaround in Texas: How one district closed its minority achievement gap. American School Board Journal, 190(2), 22-25.

Couzin, J. (2005). Echoing other cases, NEJM says Vioxx safety data withheld. Science, 310(5755), 1755.

DeNoon, D. J. (2004). Tylenol safety debated -- again: Recommended dose safe, but overdose danger debatable. Retrieved August 12, 2007, from http://www.webmd.com/pain-management/news/20040723/tylenol-safety-debated-again

Dixon-Krauss, L. A. (2003). Does action research count as scientifically-based research? A Vygotskian mediational response. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.

Eisenhart, M. (2005). Hammers and saws for the improvement of educational research. Educational Theory, 55(3), 245-261.

Erikson, E. H. (1950). Childhood and society. New York: W. W. Norton & Company, Inc.

Fraenkel, J. R., & Wallen, N. E. (2003). How to design and evaluate research in education (5th ed.). New York, NY: McGraw Hill Higher Education.

Hardman, M. L. (2003). Put me in coach: A commentary on the rpds exchange. Research & Practice for Persons with Severe Disabilities, 28(3), 153-155.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10(1), 42-61.

Howe, K. R. (2005). The question of education science: Experimentism versus experimentalism. Educational Theory, 55(3), 307-321.

Howell, W. (2006). Switching schools? A closer look at parents' Initial Interest in and knowledge about the choice provisions of No Child Left Behind. Peabody Journal of Education, 81(1), 140-179.

Institute of Education Sciences. (n.d.). What works clearinghouse: A trusted source of scientific evidence of what works in education. Retrieved August 23, 2007, from http://www.whatworks.ed.gov/

Karns, M. (2005). Innoculating against low achievement: Our job as educators is to mitigate the effects of adversity by accelerating achievements and building resiliency. Retrieved July 17, 2005, from http://www.findarticles.com/p/articles/mi_m0HUL/is_3_34/ai_n8967608

Kennedy, M. (1999). A test of some common contentions about educational research. American Educational Research Journal, 36(3), 511-541.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511-533.

Mao, M. X., Whitsett, M. D., & Mellor, L. T. (1998). Student mobility, academic performance, and school accountability. ERS Spectrum, 16(1), 3-15.

Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific Inquiry in education. Educational Researcher, 33(2), 3-11.

Mayer, D. E. (2006, Winter). Research funding in the U.S.: Implications for teacher education research. Teacher Education Quarterly, 5-18.

McKenzie, J. (2003). Children are not hamburgers. [Electronic Version]. No Child Left, 1. Retrieved May 27, 2007 from http://nochildleft.com/2003/sept03burgers.html.

National Research Council. (2002). Scientific research in education. In R. J. Shavelson & L. Towne (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education, Committee on Scientific Principles for Education Research. Washington, DC: National Academy Press.

National Research Council. (2005). Advancing scientific research in education. Committee on Research in Education. In L. Towne, L. L. Wise & T. M. Winters (Eds.). Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137-148.

Raudenbush, S. W. (2005). Learning from attempts to improve schooling: The contribution of methodological diversity. Educational Researcher, 34(5), 25-31.

Sailor, W., & Stowe, M. (2003). The relationship of inquiry to public policy. Research & Practice for Persons with Severe Disabilities, 28(3), 148-152.

San Antonio Independent School District v. Rodriguez, 411 U.S. 1 (1973).

Seaborg, G. (2004). Sputnik's legacy: Teachers know your stuff. Retrieved August 22, 2007, from http://whyfiles.org/047sputnik/main3.html

Skretta, J. (2007). Using walk-throughs to gather data for school improvement. Principal Leadership, 7(9), 16-23.

Smith, A. (2003). Scientifically based research and evidence-based education: A federal policy context. Research & Practice for Persons with Severe Disabilities, 28(3), 126-132.

Smith, S. (2005, June 10). Scientists warn on ibuprofen. Boston Globe.

Spooner, F., & Browder, D. M. (2003). Scientifically based research in education and students with low incidence disabilities. Research & Practice for Persons with Severe Disabilities, 28(3), 117-125.

Stanovich, P. J., & Stanovich, K. E. (2003). Using research and reason in education: How teachers can use scientifically based research to make curricular and instructional decisions. Retrieved August 20, 2007, from http://www.nifl.gov/partnershipforreading/publications/html/stanovich/

Stiggins, R. (2004). New mission, new beliefs: Assessment for learning. Portland, OR: Assessment Training Institute.

Teasley, B., & Berends, M. (2007). A national examination of the No Child Left Behind school choice policy. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

U. S. Department of Education. (n.d.). Proven methods: Questions and answers on no child left behind: Doing what works. Retrieved May 27, 2007, from http://www.ed.gov/nclb/methods/whatworks/doing.html

Viadero, D. (2004). Call for 'scientifically based' programs debated. Education Week, 23(28), 10.

Wenglinsky, H. (2002). How schools matter: The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12).

Whitehurst, G. J. (2001). Evidence-based education. Paper presented at the U.S. Department of Education's Improving America's Schools Conference, San Antonio, TX.

Women's Health Initiative (WHI) update--2002: WHI hormone program. (2002). Retrieved August 23, 2007, from http://www.nhlbi.nih.gov/health/women/upd2002.htm