HOME: Dismissive Reviews in Education Policy Research
#  Author  Co-author(s)  Dismissive Quote  Type  Title  Source  Link  Funders  Notes  Notes2
1 Michael Hout, Stuart W. Elliot, Editors Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. "Unfortunately, there were no other studies available that would have allowed us to contrast the overall effect of state incentive programs predating NCLB…" p. 4-6 Dismissive Incentives and Test-Based Accountability in Education, 2011 Board on Testing and Assessment, National Research Council https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925).   *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. "Others have considered the role of tests in incentive programs.  These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor.  Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna.
Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones."
What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997)
2 Michael Hout, Stuart W. Elliot, Editors Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. "Test-based incentive programs, as designed and implemented in the programs that have been carefully studied have not increased student achievement enough to bring the United States close to the levels of the highest achieving countries.", p. 4-26 Denigrating Incentives and Test-Based Accountability in Education, 2011 Board on Testing and Assessment, National Research Council https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925).   *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. Others have considered the role of tests in incentive programs.  These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor.  Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna.
Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones.
What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997)
3 Michael Hout, Stuart W. Elliot, Editors Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. "Despite using them for several decades, policymakers and educators do not yet know how to use test-based incentives to consistently generate positive effects on achievement and to improve education." p .5-1 Dismissive Incentives and Test-Based Accountability in Education, 2011 Board on Testing and Assessment, National Research Council https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925).   *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. "Others have considered the role of tests in incentive programs.  These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor.  Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna.
Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones."
What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997)
4 Michael Hout, Stuart W. Elliot, Editors Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. "The general lack of guidance coming from existing studies of test-based incentive programs in education…" Dismissive Incentives and Test-Based Accountability in Education, 2011 Board on Testing and Assessment, National Research Council https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925).   *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. "Others have considered the role of tests in incentive programs.  These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor.  Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna.
Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones."
What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997)
5 Diana Pullin (Chair)
Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson,  "However, there have been very few studies of how interim assessments are actually used by individual teachers in classrooms, by principals, and by districts or of their impact on student achievement." p. 6 Dismissive Best Practices for State Assessment Systems, Part I Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." See, for example:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm
6 Diana Pullin (Chair)
Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson,  "Research indicates that the result has been emphasis on lower-level knowledge and skills and very thin alignment with the standards. For example, Porter, Polikoff, and Smithson (2009) found very low to moderate alignment between state assessments and standards—meaning that large proportions of content standards are not covered on the assessments (see also Fuller et al., 2006; Ho, 2008)." p. 10 Denigrating Best Practices for State Assessment Systems, Part I Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." Pretty difficult to believe given how standards-based test items are developed -- directly from the standards.
7 Diana Pullin (Chair)
Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson,  "Another issue is that the implications of computer-based approaches for validity and reliability have not been thoroughly evaluated." p. 40 Dismissive Best Practices for State Assessment Systems, Part I Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems."  
8 Diana Pullin (Chair)
Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson,  "For current tests, he [Lauress Wise] observed, there is little evidence that they are good indicators of instructional effectiveness or good predictors of students’ readiness for subsequent levels of instruction." Dismissive Best Practices for State Assessment Systems, Part I Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems."  
9 L. Shepard, J. Hannaway, E. Baker (Eds.) Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson
"Research on effective schools, for example, documents that schools with a sense of common purpose and emphasis on academics can produce student achievement well above demographic predictions. But, this research often relied on case studies of exceptional schools." p.6 Denigrating Education Policy White Paper: Standards, Assessments, and Accountability, 2009 National Academcy of Education http://files.eric.ed.gov/fulltext/ED531138.pdf   But, often it focused on ordinary schools.The research base on effective school is gargantuan in number and size.
10 L. Shepard, J. Hannaway, E. Baker (Eds.) Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson
"Although state standards have been in use since the late 1980s, and scholarly work on progressions has made significant strides in recent years, there has been little attention in the United States to incorporating the most up-to-date thinking about cognition and learning progressions into curriculum materials and assessments." pp.7-8 Dismissive, Denigrating Education Policy White Paper: Standards, Assessments, and Accountability, 2009 National Academcy of Education http://files.eric.ed.gov/fulltext/ED531138.pdf    
11 L. Shepard, J. Hannaway, E. Baker (Eds.) Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson
"There is at present no direct way to measure changes in instruction that would withstand the requirements of high-stakes use. Although research on classroom observational instruments is limited, a great deal is known from cognitive science research." p.13 Dismissive Education Policy White Paper: Standards, Assessments, and Accountability, 2009 National Academcy of Education http://files.eric.ed.gov/fulltext/ED531138.pdf   There exist an enormous number of such observational studies. See, for example:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm
12 Douglas N. Harris Lori L. Taylor, Amy A. Levine, William K. Ingle, Leslie McDonald "However, previous studies understate current costs by focusing on costs before NCLB was put in place and by excluding important cost categories." Denigrating The Resource Costs of Standards, Assessments, and Accountability report to the National Research Council   National Research Council funders No, they did not leave out important cost categories; Harris' study deliberately exaggerates costs. See pages 3-10: https://nonpartisaneducation.org/Review/Essays/v10n1.pdf
13 Jay P. Heubert   "For Heubert, it is very much an open question what the effect of standards and high-stakes testing will be." p.83 Dismissive Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
14 Jay P. Heubert   "There is little evidence to suggest that exit exams in current use have been validated properly against the defined curriculum and actual instruction; rather, it appears that many states have not taken adequate steps to validate their assessment instruments, and the proper studies would reveal important weaknesses." pp.83-84 Dismissive Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." Relevant studies of the effects of tests and/or accountability programs on motivation and instructional practice include those of the *Southern Regional Education Board (1998); Johnson (1998); Schafer, Hultgren, Hawley, Abrams Seubert & Mazzoni (1997); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); Tuckman & Trimble (1997); Clarke & Stephens (1996); Zigarelli (1996); Stevenson, Lee, et al. (1995); Waters, Burger & Burger (1995); Egeland (1995); Prais (1995); Tuckman (1994); Ritchie & Thorkildsen (1994); Brown & Walberg (1993); Wall & Alderson (1993); Wolf & Rapiau (1993); Eckstein & Noah (1993); Chao-Qun & Hui (1993); Plazak & Mazur (1992); Steedman (1992); Singh, Marimutha & Mukjerjee (1990); *Levine & Lezotte (1990); O’Sullivan (1989); Somerset (1988); Pennycuick & Murphy (1988); Stevens (1984); Marsh (1984); Brunton (1982); Solberg (1977); Foss (1977); *Kirkland (1971); Somerset (1968); Stuit (1947); and Keys (1934).  *Covers many studies; study is a research review, research synthesis, or meta-analysis. "Others have considered the role of tests in incentive programs.  These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor.  Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna.
Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones."
15 Christopher Edley, Jr.   "To be sure, there is a largely unexamined empirical assertion underlying the arguments of high-stakes proponents: attaching high-stakes consequences for the students provides an indispensable, otherwise unobtainable incentive for students, parents, and teachers to pay careful attention to learning tasks. For the countless parents, policy makers, and observers who approach these debates as instrumentalists, the accuracy of this assertion is a central mystery as we struggle to close the education gap." p.128 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." Relevant studies of the effects of tests and/or accountability programs on motivation and instructional practice include those of the *Southern Regional Education Board (1998); Johnson (1998); Schafer, Hultgren, Hawley, Abrams Seubert & Mazzoni (1997); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); Tuckman & Trimble (1997); Clarke & Stephens (1996); Zigarelli (1996); Stevenson, Lee, et al. (1995); Waters, Burger & Burger (1995); Egeland (1995); Prais (1995); Tuckman (1994); Ritchie & Thorkildsen (1994); Brown & Walberg (1993); Wall & Alderson (1993); Wolf & Rapiau (1993); Eckstein & Noah (1993); Chao-Qun & Hui (1993); Plazak & Mazur (1992); Steedman (1992); Singh, Marimutha & Mukjerjee (1990); *Levine & Lezotte (1990); O’Sullivan (1989); Somerset (1988); Pennycuick & Murphy (1988); Stevens (1984); Marsh (1984); Brunton (1982); Solberg (1977); Foss (1977); *Kirkland (1971); Somerset (1968); Stuit (1947); and Keys (1934).  *Covers many studies; study is a research review, research synthesis, or meta-analysis.
16 Christopher Edley, Jr.   "There has been too little attention in policy and political debates to the rate of school improvement." p.130 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
17 Christopher Edley, Jr.   "Yet, curiously, there is little public debate and little research about the rate of change we should require of school reform efforts in order to win the continuing support of voters and taxpayers." p.130 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
18 Christopher Edley, Jr.   "Certainly much research remains to be done—conceptualized, even—in the continuing effort to give educators and parents the insights needed to promote learning." p.131 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
19 Christopher Edley, Jr.   "For many serious policy analysts, the choice issue is uninteresting because there is so little good science to digest, the methodological challenges seem all but imponderable, and purists insist that there should be large-scale randomized experiments, which seem impossible on practical grounds. The few studies to date have fueled a firestorm of controversy out of proportion to the available evidence." p.133 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
20 Christopher Edley, Jr.   "Looking to the future, this situation must not stand. … We must have research of sufficient quantity and quality to match the growing challenge that this represents in so many communities." p.136 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
21 Christopher Edley, Jr.   "The policy and political question is how much weight to accord them in light of the science. The science is too thin." p.137 Dismissive "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
22 Michael A. Rebell   "Even though democratic theory in the United States in recent decades has extolled the concept of the informed citizen, there has, in fact, been little discussion, let alone analysis, of the specific skills individuals need to carry out the functions of such a citizen." p.244 Dismissive "Educational Adequacy, Democracy, and the Courts" Achieving High Standards for All National Research Council   "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education."
23 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "Despite their importance and widespread use, little is known about the impact of these tests on states’ recent efforts to improve teaching and learning." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
24 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "Little information about the technical soundness of teacher licensure tests appears in the published literature." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
25 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "Little research exists on the extent to which licensure tests identify candidates with the knowledge and skills necessary to be minimally competent beginning teachers." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
26 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "Information is needed about the soundness and technical quality of the tests that states use to license their teachers." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
27 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "policy and practice on teacher licensure testing in the United States are nascent and evolving" Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
28 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "The paucity of data and these methodological challenges made the committee’s examination of teacher licensure testing difficult." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
29 Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard,  "There were a number of questions the committee wanted to answer but could not, either because they were beyond the scope of this study, the evidentiary base was inconclusive, or the committee’s time and resources were insufficient." Dismissive Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 Committee on Assessment and Teacher Quality   Board on Testing and Assessment, National Research Council Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent.
30 Sheila Barron   "Although this is a topic researchers ... talk about often, very little has been written about the difficulties secondary analysts confront." p.173 Dismissive Difficulties associated with secondary analysis of NAEP data, chapter 9 Grading the Nation's Report Card, National Research Council, 2000 https://www.nap.edu/catalog/9751/grading-the-nations-report-card-research-from-the-evaluation-of National Research Council funders In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy (2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schulz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000), Stancavage et al. (2002), Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).
31 Sheila Barron   "...few articles have been written that specifically address the difficulties of using NAEP data." p.173 Dismissive Difficulties associated with secondary analysis of NAEP data, chapter 9 Grading the Nation's Report Card, National Research Council, 2000 https://www.nap.edu/catalog/9751/grading-the-nations-report-card-research-from-the-evaluation-of National Research Council funders In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy (2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schulz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000), Stancavage et al. (2002), Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).
32 Jay P. Heubert, Robert M. Hauser, Eds.   "A growing body of research suggests that tests often do in fact change school and classroom practices (Corbett & Wilson, 1991; Madaus, 1988; Herman & Golan 1993; Smith & Rottenberg, 1991)." p.29 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Rubbish. Entire books dating back a century were written on the topic, for example:  C.C. Ross, Measurement in Today’s Schools, 1942;  G.M. Ruch, G.D. Stoddard, Tests and Measurements in High School Instruction, 1927;  C.W. Odell, Educational Measurement in High School, 1930. Other testimonies to the abundance of educational testing and empirical research on test use starting in the first half of the twentieth century can be found in Lincoln & Workman 1936, 4, 7; Butts 1947, 605; Monroe 1950, 1461; Holman & Docter 1972, 34; Tyack 1974, 183; and Lohman 1997, 88.
33 Jay P. Heubert, Robert M. Hauser, Eds.   "A growing body of research suggests that tests often do in fact change school and classroom practices (Corbett & Wilson, 1991; Madaus, 1988; Herman & Golan 1993; Smith & Rottenberg, 1991)." p.29 Denigrating High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Rubbish. Entire books dating back a century were written on the topic, for example:  C.C. Ross, Measurement in Today’s Schools, 1942;  G.M. Ruch, G.D. Stoddard, Tests and Measurements in High School Instruction, 1927;  C.W. Odell, Educational Measurement in High School, 1930. Other testimonies to the abundance of educational testing and empirical research on test use starting in the first half of the twentieth century can be found in Lincoln & Workman 1936, 4, 7; Butts 1947, 605; Monroe 1950, 1461; Holman & Docter 1972, 34; Tyack 1974, 183; and Lohman 1997, 88.
34 Jay P. Heubert, Robert M. Hauser, Eds.   "Most standards-based assessments have only recently been implemented or are still being developed. Consequently, it is too early to determine whether they will produce the intended effects on classroom instruction." p.36 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
35 Jay P. Heubert, Robert M. Hauser, Eds.   "A recent review of the available research evidence by Mehrens (1998) reaches several interim conclusions. Drawing on eight studies...." p.36 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan(1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978).
36 Jay P. Heubert, Robert M. Hauser, Eds.   "Although there are no national data summarizing how local districts use standardized tests in certifying students, we do know that several of the largest school systems have begun to use test scores in determining grade-to-grade promotion (Chicago) or are considering doing so (New York City, Boston)." p.37 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978).
37 Jay P. Heubert, Robert M. Hauser, Eds.   "There is very little research that specifically addresses the consequences of graduation testing." p.172 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan(1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978).
38 Jay P. Heubert, Robert M. Hauser, Eds.   "Catterall adds, 'initial boasts and doubts alike regarding the effects of gatekeeping competency testing have met with a paucity of follow-up research.'" p.172 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978).
39 Jay P. Heubert, Robert M. Hauser, Eds.   "In one of the few such studies on this topic, Bishop (1997) compared the Third International Mathematics and Science Study (TIMSS) test scores of countries with and without rigorous graduation tests. He found that countries with demanding exit exams outperformed other countries at a comparable level of development. He concluded, however, that such exams were probably not the most important determinant of achievement levels and that more research was needed." p.173 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978).
40 Jay P. Heubert, Robert M. Hauser, Eds.   "Very little is known about the specific consequences of passing or failing a high school graduation exam." p.176 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978).
41 Jay P. Heubert, Robert M. Hauser, Eds.   "American experience is limited and research is needed to explore their effectiveness. For instance, we do not know how to combine advance notice of high-stakes test requirements, remedial intervention, and opportunity to retake graduation tests." p.180 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978).
42 Jay P. Heubert, Robert M. Hauser, Eds.   "Research is also needed to explore the effects of different kinds of high school credentials on employment and other post-school outcomes." p.180 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation  
43 Jay P. Heubert, Robert M. Hauser, Eds.   "At the same time, solid evaluation research on the most effective remedial approaches is sparse." p.183 Denigrating High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Developmental (i.e., remedial) education researchers have conducted many studies to determine what works best to keep students from failing in their “courses of last resort,” after which there are no alternatives.  Researchers have included Boylan, Roueche, McCabe, Wheeler, Kulik, Bonham, Claxton, Bliss, Schonecker, Chen, Chang, and Kirk.
44 Jay P. Heubert, Robert M. Hauser, Eds.   "There is plainly a need for good research on effective remedial education." p.183 Denigrating High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Developmental (i.e., remedial) education researchers have conducted many studies to determine what works best to keep students from failing in their “courses of last resort,” after which there are no alternatives. Researchers have included Boylan, Roueche, McCabe, Wheeler, Kulik, Bonham, Claxton, Bliss, Schonecker, Chen, Chang, and Kirk.
45 Jay P. Heubert, Robert M. Hauser, Eds.   "However, in most of the nation, much needs to be done before a world-class curriculum and world-class instruction will be in place." p.277 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation  
46 Jay P. Heubert, Robert M. Hauser, Eds.   "The committee sees a strong need for better evidence on the benefits and costs of high-stakes testing." p.281 Denigrating High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation No. See, for example, Phelps, R.P. (2000, Winter). Estimating the cost of systemwide student testing in the United States. Journal of Education Finance, 25(3), 343–380; Danitz, T. (2001, February 27). Special report: States pay $400 million for tests in 2001. Stateline.org. Pew Center for the States; Hoxby, C.M. (2002). The cost of accountability, in W. M. Evers & H.J. Walberg (Eds.), School Accountability, Stanford, CA: Hoover Institution Press; U.S. GAO. (1993, January). Student testing: Current extent and expenditures, with cost estimates for a national examination. GAO/PEMD-93-8. Washington, DC: US General Accounting Office; Phelps, R.P. (1998). Benefit-cost analysis of systemwide student testing. Paper presented at the annual meeting of the American Education Finance Association, Mobile, AL.
47 Jay P. Heubert, Robert M. Hauser, Eds.   "Very little is known about the specific consequences of passing or failing a high school graduation exam." p.288 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry
48 Jay P. Heubert, Robert M. Hauser, Eds.   "At present, however, advanced skills are often not well defined and ways of assessing them are not well established." p.289 Denigrating High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. 
49 Jay P. Heubert, Robert M. Hauser, Eds.   "...in many cases, the demands that full participation of these students [i.e., students with disabilities] place on assessment systems are greater than current assessment knowledge and technology can support." p.191 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. 
50 Jay P. Heubert, Robert M. Hauser, Eds.   "...available evidence about the possible effects of graduation tests on learning and on high school dropout is inconclusive (e.g., Kreitzer et al., 1989, Reardon, 1996; Catterall, 1990; Cawthorne, 1990; Bishop, 1997)." Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry
51 Jay P. Heubert, Robert M. Hauser, Eds.   "We do not know how to combine advance notice of high-stakes test requirements, remedial intervention, and opportunity to retake graduation tests. Research is also needed to explore the effects of different kinds of high school credentials on employment and other post-school outcomes." p.289 Dismissive High Stakes: Testing for Tracking, Promotion, and Graduation Board on Testing and Assessment, National Research Council, 1999 https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation Ford Foundation The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry
52 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "But the practical nature of our charge and the limits of the evidence available to us have meant that we have also had to draw on the practical experience of committee members and outside experts in crafting our advice. Hence, this report relies heavily on expert advice from the field, in addition to scientific research." p. vii Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
53 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "we reviewed available evidence from research on assessment, accountability, and standards-based reform. However, we recognized that in many areas the evidentiary base was slim." p.11 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
54 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "Standards-based reform is a new idea, and few places have put all the pieces in place, and even fewer have put them in place long enough to enable scholars to observe their effects." p.11 1stness Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
55 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "Yet despite the prominence of standards-based reform in the policy debate, there are few examples of districts or states that have put the entire standards-based puzzle together, much less achieved success through it. Some evidence is beginning to gather." p.16 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
56 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "In large part, the limited body of evidence in this country reflects the complexity of the concept." p.16 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930).
57 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "Despite the common use of such accommodations, however, there is little research on their effects on the validity of test score information, and most of the research has examined college admission tests and other postsecondary measures, not achievement tests in elementary and secondary schools (National Research Council, 1997a)." p.57 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P. Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
58 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "Because of the paucity of research, questions remain about whether test results from assessments using accommodations represent valid and reliable indicators of what students with disabilities know and are able to do (Koretz, 1997)." p.57 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing.
59 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "As with accommodations for students with disabilities, the research on the effects of test accommodations for English-language learners is inconclusive." p.62 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing.
60 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "The small body of research that has examined classrooms in depth suggests that such instructional practices may be rare, even among teachers who say they endorse the changes the standards are intended to foster." p.75 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing.
61 Richard F. Elmore, Robert Rothman, Eds. Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnel, Lauress L. Wise, Michael Feuer, et al. "Districts' capacity to monitor the conditions of instruction in schools is limited, and there are few examples of districts that have been shown to be effective in analyzing such conditions and using the data to improve instruction. The research base on such efforts is slim, in large part because there are so few examples to study." p.76 Dismissive Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council   Pew Charitable Trusts, Spencer Foundation, William T. Grant Foundation Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing.
62 Hartigan, J. A., & Wigdor, A. K.   "The empirical evidence cited for the standard deviation of worker productivity is quite slight." p.239 Dismissive Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
63 Hartigan, J. A., & Wigdor, A. K.   "Some fragmentary confirming evidence that supports this point of view can be found in Hunter et al. (1988)... We regard the Hunter and Schmidt assumption as plausible but note that there is very little evidence about the nature of the relationship of ability to output." p.243 Dismissive Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
64 Hartigan, J. A., & Wigdor, A. K.   "It is also important to remember that the most important assumptions of the Hunter-Schmidt models rest on a very slim empirical foundation .... Hunter and Schmidt's economy-wide models are based on simple assumptions for which the empirical evidence is slight." p.245 Dismissive, Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
65 Hartigan, J. A., & Wigdor, A. K.   "It is important to remember that the most important assumptions of the Hunter-Schmidt models rest on a very slim empirical foundation." p.245 Dismissive, Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
66 Hartigan, J. A., & Wigdor, A. K.   "Hunter and Schmidt's economy wide models are based on simple assumptions for which the empirical evidence is slight." p.245 Dismissive, Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
67 Hartigan, J. A., & Wigdor, A. K.   "That assumption is supported by only a very few studies." p.245 Dismissive, Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
68 Hartigan, J. A., & Wigdor, A. K.   "There is no well-developed body of evidence from which to estimate the aggregate effects of better personnel selection...we have seen no empirical evidence that any of them provide an adequate basis for estimating the aggregate economic effects of implementing the VG-GATB on a nationwide basis." p.247 Dismissive, Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
69 Hartigan, J. A., & Wigdor, A. K.   "Furthermore, given the state of scientific knowledge, we do not believe that realistic dollar estimates of aggregate gains from improved selection are even possible." p.248 Dismissive Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
70 Hartigan, J. A., & Wigdor, A. K.   "...primitive state of knowledge..." p.248 Denigrating Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery.  Washington, DC: National Academy Press, 1989 https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the National Research Council funders See, for example, The National Research Council’s Testing Expertise,  https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc
                   
  IRONIES:                
  Michael J. Feuer   "It is our way of reminding ourselves, and others, that we hold to high evidentiary standards when it comes to programs or policies that affect the lives of people or the workings of organizations." p.98   Past as Prologue: The National Academy of Education at 50 National Academy of Education funders
  Michael J. Feuer   "Other societies have tried to suppress science when it interferes with politics or religion." p.97   Past as Prologue: The National Academy of Education at 50 National Academy of Education funders
  Michael J. Feuer   "We invite and pay for an extraordinary amount of certifiably expert input to feed our apparently insatiable appetite for data." p.98   Past as Prologue: The National Academy of Education at 50 National Academy of Education funders
  Michael J. Feuer   "… one advantage of the Academy depends on keeping evidence ahead of advocacy—even if we are not sure how to define evidence and appreciate the passions that bring people to this work in the first place." p.98   Past as Prologue: The National Academy of Education at 50 National Academy of Education funders
  Michael J. Feuer   "… NRC report review, which is often the butt of humor because it appears to privilege rigor over relevance;" p.99   Past as Prologue: The National Academy of Education at 50 National Academy of Education funders
  Michael J. Feuer   "To challenge authority is to hold authority accountable. Challenging people in power requires them to show that what they are doing is legitimate; we invite them to rise to the challenge and prove their case; and they, in turn, trust that the system will treat them fairly."   Measuring Accountability When Trust Is Conditional Education Week, September 24, 2012 https://www.edweek.org/ew/articles/2012/09/24/05feuer_ep.h32.html?print=1    
  Michael J. Feuer   "No profession is granted automatic autonomy or an exemption from evaluation."   Measuring Accountability When Trust Is Conditional Education Week, September 24, 2012 https://www.edweek.org/ew/articles/2012/09/24/05feuer_ep.h32.html?print=1    
                   
      Author cites (and accepts as fact without checking) someone else's dismissive review
      Cite themselves or colleagues in the group, but dismiss or denigrate all other work
      Falsely claim that research has only recently been done on the topic.