Dismissive Reviews in Education Policy Research
No. Author Co-author(s) Quote Type Title Source Funders Link Notes Notes2
1 Peter De Vlieger Brian A. Jacob, Kevin Stange "Yet relatively little is known about the impact of instructor effectiveness on student performance in higher education" Dismissive Measuring up: Assessing instructor effectiveness in higher education Education Next, SUMMER 2017 / VOL. 17, NO. 3 Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/measuring-up-assessing-instructor-effectiveness-higher-education/ In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
2 Peter De Vlieger Brian A. Jacob, Kevin Stange "This lack of research is largely the result of data and methodological challenges." Dismissive Measuring up: Assessing instructor effectiveness in higher education Education Next, SUMMER 2017 / VOL. 17, NO. 3 Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/measuring-up-assessing-instructor-effectiveness-higher-education/ In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
3 Peter De Vlieger Brian A. Jacob, Kevin Stange "Instructors are a chief input into the higher education production process, yet we know very little about their role in promoting student success." Dismissive Measuring Instructor Effectiveness in Higher Education, abstract "This paper was prepared for the NBER Conference “Productivity in Higher Education” held on June 1, 2016." Published by NBER November 30,2016. NBER funders   In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
4 Peter De Vlieger Brian A. Jacob, Kevin Stange "Professors and instructors are a chief input into the higher education production process, yet we know very little about their role in promoting student success." Dismissive Measuring Instructor Effectiveness in Higher Education, p.1 "This paper was prepared for the NBER Conference “Productivity in Higher Education” held on June 1, 2016." Published by NBER November 30,2016. NBER funders   In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
5 Peter De Vlieger Brian A. Jacob, Kevin Stange "Yet relatively little is known about the importance of or correlates of instructor effectiveness in postsecondary education." Dismissive Measuring Instructor Effectiveness in Higher Education, p.4 "This paper was prepared for the NBER Conference “Productivity in Higher Education” held on June 1, 2016." Published by NBER November 30,2016. NBER funders   In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
6 Peter De Vlieger Brian A. Jacob, Kevin Stange "Yet little is known about instructor effectiveness in postsecondary education, in part due to difficulties with outcome measurement and self-selection." Dismissive Measuring Instructor Effectiveness in Higher Education, p.4 "This paper was prepared for the NBER Conference “Productivity in Higher Education” held on June 1, 2016." Published by NBER November 30,2016. NBER funders   In fact, the research literature on testing in higher education is long and deep. Consider, for example, the work of Trudy Banta, Patricia Cross, and Thomas Angelo. See also the large number of higher education studies in this meta analysis:  https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm    
7 Alia Wong (journalist) Thomas S. Dee, Will Dobbie, Brian A. Jacob, & Jonah Rockoff "The prevalence of test-score manipulation in the United States is well-documented. ... What hasn’t been well documented are the causes and consequences of such manipulation." Dismissive Why Would a Teacher Cheat? Educators often choose to inflate students' scores on standardized tests, and the motivations—and effects—indicate that a little deception isn't always a bad thing.  The Atlantic,  April 27, 2016 NBER funders https://www.theatlantic.com/education/archive/2016/04/why-teachers-cheat/480039/ Actually, there have been surveys in which respondents freely admit that they cheat and how. Moreover, news reports of cheating, by students or educators, have been voluminous. See, for example, Caveon Test Security's "Cheating in the News" section on its web site.  The most famous test score inflation study of all time -- John J. Cannell's "Lake Wobegon Effect" study -- is largely about cheating. See:  http://nonpartisaneducation.org/Review/Books/CannellBook1.htm  http://nonpartisaneducation.org/Review/Books/Cannell2.pdf;  See also Gregory J. Cizek's Cheating on Tests: https://www.goodreads.com/book/show/5084641-cheating-on-tests ; and Caveon Test Security's resource pages: https://www.caveon.com/resources/.
8 Brian A. Jacob Thomas S. Dee, Will Dobbie, Jonah Rockoff "…despite widespread concerns over test validity and the manipulation of scores, we know little about the factors that lead educators to manipulate student scores (e.g., accountability policies versus individual students traits)." Dismissive The Causes and Consequences of Test Score Manipulation: Evidence from the New York Regents Examinations, p.1 National Bureau of Economic Research, Working Paper 22165, April 2016 NBER funders http://www.nber.org/papers/w22165 Actually, there have been surveys in which respondents freely admit that they cheat and how. Moreover, news reports of cheating, by students or educators, have been voluminous. See, for example, Caveon Test Security's "Cheating in the News" section on its web site.  The most famous test score inflation study of all time -- John J. Cannell's "Lake Wobegon Effect" study -- is largely about cheating. See:  http://nonpartisaneducation.org/Review/Books/CannellBook1.htm  http://nonpartisaneducation.org/Review/Books/Cannell2.pdf;  See also Gregory J. Cizek's Cheating on Tests: https://www.goodreads.com/book/show/5084641-cheating-on-tests ; and Caveon Test Security's resource pages: https://www.caveon.com/resources/.
9 Brian A. Jacob Thomas S. Dee, Will Dobbie, Jonah Rockoff "…there is little empirical evidence on whether test score manipulation has any long-run consequences for students' educational outcomes and performance gaps by race, ethnicity, and gender." Dismissive The Causes and Consequences of Test Score Manipulation: Evidence from the New York Regents Examinations, p.1 National Bureau of Economic Research, Working Paper 22165, April 2016 NBER funders http://www.nber.org/papers/w22165 Actually, there have been surveys in which respondents freely admit that they cheat and how. Moreover, news reports of cheating, by students or educators, have been voluminous. See, for example, Caveon Test Security's "Cheating in the News" section on its web site.  The most famous test score inflation study of all time -- John J. Cannell's "Lake Wobegon Effect" study -- is largely about cheating. See:  http://nonpartisaneducation.org/Review/Books/CannellBook1.htm  http://nonpartisaneducation.org/Review/Books/Cannell2.pdf;  See also Gregory J. Cizek's Cheating on Tests: https://www.goodreads.com/book/show/5084641-cheating-on-tests ; and Caveon Test Security's resource pages: https://www.caveon.com/resources/.
10 Brian A. Jacob Thomas S. Dee, Will Dobbie, Jonah Rockoff "Our results contribute to an emerging literature that documents both the moral hazard that can be created by test-scoring procedures…. In early work, Jacob and Levitt (2003) find..." Dismissive The Causes and Consequences of Test Score Manipulation: Evidence from the New York Regents Examinations, p.3 National Bureau of Economic Research, Working Paper 22165, April 2016 NBER funders http://www.nber.org/papers/w22165 Actually, there have been surveys in which respondents freely admit that they cheat and how. Moreover, news reports of cheating, by students or educators, have been voluminous. See, for example, Caveon Test Security's "Cheating in the News" section on its web site.  The most famous test score inflation study of all time -- John J. Cannell's "Lake Wobegon Effect" study -- is largely about cheating. See:  http://nonpartisaneducation.org/Review/Books/CannellBook1.htm  http://nonpartisaneducation.org/Review/Books/Cannell2.pdf;  See also Gregory J. Cizek's Cheating on Tests: https://www.goodreads.com/book/show/5084641-cheating-on-tests ; and Caveon Test Security's resource pages: https://www.caveon.com/resources/.
12 Brian A. Jacob Jonah Rockoff, Eric Taylor, Ben Lindy, Rachel Rosen  "...despite many decades of research, little progress has been made in establishing rigorous methods to select individuals likely to become successful teachers. ... More recent research has shown some promising results ... (Rockoff et al. 2011) ... (Boyd et al. 2008). Only one concurrent study (Goldhaber et al. 2014) examines the extent to which teacher performance can be predicted using data collected as part of an actual hiring process."  Denigrating Teacher Applicant Hiring and Teacher Performance: Evidence from DC Public Schools  New York Federal Reserve Bank   https://www.newyorkfed.org/medialibrary/media/research/education_seminar_series/jrtlr_teach_dc_23_feb_2015.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
13 Brian A. Jacob Jonah Rockoff, Eric Taylor, Ben Lindy, Rachel Rosen  "Selecting more effective teachers among job applicants during the hiring process could be a highly cost-effective means of improving educational quality, but there is little research that links information gathered during the hiring process to subsequent teacher performance.", Abstract Dismissive Teacher Applicant Hiring and Teacher Performance: Evidence from DC Public Schools  New York Federal Reserve Bank   https://www.newyorkfed.org/medialibrary/media/research/education_seminar_series/jrtlr_teach_dc_23_feb_2015.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
14 Brian A. Jacob   “And, yet, there is little empirical evidence on whether such incentives will change teacher behavior or improve student achievement.” p. 2 Dismissive The Effect of Employment Protection on Teacher Effort University of Michigan & NBER, March 2012 NBER funders http://cep.lse.ac.uk/seminarpapers/07-05-13-BJ.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
15 Brian A. Jacob   “In addition, this analysis contributes to the economic literature on employment protection more generally. To the best of my knowledge, it is one of the few empirical studies of the impact of employment protection on worker effort...” p. 4 1stness The Effect of Employment Protection on Teacher Effort University of Michigan & NBER, March 2012 NBER funders http://cep.lse.ac.uk/seminarpapers/07-05-13-BJ.pdf In response to similar claims made by one of Jacob's colleagues, the teacher union expert Myron Lieberman showcased bibliographies on the topic with the number of references exceeding a thousand.    
16 Brian A. Jacob   “ . . . the only study to directly examine this issue in the public sector.” p. 4 1stness The Effect of Employment Protection on Teacher Effort University of Michigan & NBER, March 2012 NBER funders http://cep.lse.ac.uk/seminarpapers/07-05-13-BJ.pdf In response to similar claims made by one of Jacob's colleagues, the teacher union expert Myron Lieberman showcased bibliographies on the topic with the number of references exceeding a thousand.    
17 Brian A. Jacob   “Surprisingly few studies have examined the impact of employment protection on worker behavior.” p. 5 Dismissive The Effect of Employment Protection on Teacher Effort University of Michigan & NBER, March 2012 NBER funders http://cep.lse.ac.uk/seminarpapers/07-05-13-BJ.pdf In response to similar claims made by one of Jacob's colleagues, the teacher union expert Myron Lieberman showcased bibliographies on the topic with the number of references exceeding a thousand.    
18 Brian A. Jacob   “Two recent reviews of pay-for-performance in education conclude that the existing evidence on merit pay is limited and shows mixed results (Springer and Podgursky 2008, Lavy 2008).” p. 6 Dismissive The Effect of Employment Protection on Teacher Effort University of Michigan & NBER, March 2012 NBER funders http://cep.lse.ac.uk/seminarpapers/07-05-13-BJ.pdf Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); Hurlock (1925), and Zeng (2001). *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, even prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones.  
19 Brian A. Jacob   "On this topic, debate has been vigorous but research almost nil,…" Dismissive Principled principals: New evidence from Chicago shows they fire the least effective teachers Education Next, Fall 2011 / Vol. 11, No. 4 Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/principled-principals/ Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); Hurlock (1925), and Zeng (2001). *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, even prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones.  
20 Thomas S. Dee Brian A. Jacob "Though the reauthorization of NCLB is currently under consideration, the empirical evidence on the impact of NCLB on student achievement is, to date, extremely limited." Dismissive The Impact of No Child Left Behind on Student Achievement, p.419 Journal of Policy Analysis and Management, 30(3), 418–446 (2011)
21 Thomas S. Dee Brian A. Jacob "In a recent review of this diverse evaluation literature, Figlio and Ladd (2008) suggest that three studies (Carnoy & Loeb, 2002; Jacob, 2005; Hanushek & Raymond, 2005) are the “most methodologically sound” (Ladd, 2007)." Denigrating The Impact of No Child Left Behind on Student Achievement, p.420–421 Journal of Policy Analysis and Management, 30(3), 418–446 (2011)     Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper, 1993; Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran, 1989; *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); Hurlock (1925), and Zeng (2001). *Covers many studies; study is a research review, research synthesis, or meta-analysis.  Other researchers who, even prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones.
22 Brian A. Jacob Thomas S. Dee “However, there is surprisingly little research on the relationship between school accountability and spending, despite an extensive literature on education finance more generally.” p. 175 Dismissive The Impact of No Child Left Behind on Students, Teachers, and Schools Brookings Papers on Economic Activity, Fall 2010  Brookings Institution Funders http://www.brookings.edu/~/media/Projects/BPEA/Fall%202010/2010b_bpea_dee.PDF      
23 Brian A. Jacob Thomas S. Dee “Few studies have implemented regression-based research designs that attempt to isolate the effects of school accountability policies on district, school, and classroom practices from the potentially confounding effects of other determinants.” p. 181 Denigrating The Impact of No Child Left Behind on Students, Teachers, and Schools Brookings Papers on Economic Activity, Fall 2010  Brookings Institution Funders http://www.brookings.edu/~/media/Projects/BPEA/Fall%202010/2010b_bpea_dee.PDF In fact, a very large number of studies do so. See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weerasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
24 Brian A. Jacob   “In contrast, there has been remarkably little research on the demand side of the teacher labor market. For example, few studies have examined how principals hire or fire teachers, or how changes in personnel policies might influence teacher quality.” p. 2 Dismissive Do Principals Fire the Worst Teachers? NBER Working Paper No. 15715, February 2010  NBER funders http://www-personal.umich.edu/~bajacob/w15715_teacher_firing.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
25 Brian A. Jacob   “This void is not unique to education research. There is a vast economics literature on employee compensation, for example, but relatively few empirical studies that examine the factors that employers consider when hiring or dismissing workers.” p. 3 Dismissive Do Principals Fire the Worst Teachers? NBER Working Paper No. 15715, February 2010  NBER funders http://www-personal.umich.edu/~bajacob/w15715_teacher_firing.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
26 Brian A. Jacob   “While many studies mention the determinants of job displacement, few studies attempt to carefully explore employer preferences for worker characteristics. One important exception is the literature on employer discrimination.” p. 6 Dismissive Do Principals Fire the Worst Teachers? NBER Working Paper No. 15715, February 2010  NBER funders http://www-personal.umich.edu/~bajacob/w15715_teacher_firing.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
27 Brian A. Jacob   Specifically, I document the extent to which student performance trends on state assessments differ from those on the National Assessment of Educational Progress (NAEP). While such divergence has been documented in several studies ... there has been no systematic analysis of this issue nationwide..., p.2 Dismissive Test-based accountability and student achievement: An investigation of differential performance on NAEP and state assessment CLOSUP Working Paper Series, Number 17, February 2009, U. Michigan Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://closup.umich.edu In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
28 Brian A. Jacob   To fill this gap in the research literature, …, p.3 Dismissive Test-based accountability and student achievement: An investigation of differential performance on NAEP and state assessment CLOSUP Working Paper Series, Number 17, February 2009, U. Michigan Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://closup.umich.edu In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
29 Brian A. Jacob   More importantly, there has been little research on the reasons why student performance differs between NAEP and local assessments, p.11 Dismissive Test-based accountability and student achievement: An investigation of differential performance on NAEP and state assessment CLOSUP Working Paper Series, Number 17, February 2009, U. Michigan Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://closup.umich.edu In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
30 Brian A. Jacob   However, this work does not explicitly examine the issue of test score inflation. Jacob (2005) makes an effort to fill this gap. P.12 Dismissive Test-based accountability and student achievement: An investigation of differential performance on NAEP and state assessment CLOSUP Working Paper Series, Number 17, February 2009, U. Michigan Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://closup.umich.edu In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature:  https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Messick & Jungeblut (1981)  Ellis, Konoske, Wulfeck, & Montague (1982)  DerSimonian and Laird (1983)  Kulik, Bangert-Drowns & Kulik (1984) Fraker (1986/1987) Halpin (1987) Whitla (1988)  Snedecor (1989)  Becker (1990)  Smyth (1990) Moore (1991)  Alderson & Wall (1992)  Powers (1993)  Powers & Rock (1994)  Scholes, Lane (1997)   Allalouf & Ben Shakhar (1998)  Robb & Ercanbrack (1999)  McClain (1999)  Camara (1999, 2001, 2008) Stone & Lane (2000, 2003)  Din & Soldan (2001)  Briggs (2001)  Palmer (2002)  Briggs & Hansen (2004)  Cankoy & Ali Tut (2005)  Crocker (2005)  Allensworth, Correa, & Ponisciak (2008)  Domingue & Briggs (2009)   Early (2019)    
31 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “Research on the relationship between teachers' characteristics and teacher effectiveness has been underway for over a century, yet little progress has been made in linking teacher quality with factors observable at the time of hire.” p. 1 Dismissive Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Just like there exists a massive literature on "effective schools" there exists a subset on "effective teachers."
32 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “However, most research on teacher effectiveness has examined a relatively small set of teacher characteristics, such as graduate education and certification . . . researchers’ lack of success in predicting new teacher performance may be driven by a narrow focus on commonly available data.” p. 1 Denigrating Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Just like there exists a massive literature on "effective schools" there exists a subset on "effective teachers."
33 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “While many studies have been conducted, few definitive conclusions have been made. One reason has been the widespread but controversial use of the Minnesota Multiphasic Personality Inventory. …” p. 8 Dismissive Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Just like there exists a massive literature on "effective schools" there exists a subset on "effective teachers."
34 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “However, there is little work examining the relationship between self-efficacy and student learning.” p. 9 Dismissive Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Just like there exists a massive literature on "effective schools" there exists a subset on "effective teachers."
35 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “In addition to being one of the first studies of teacher value-added and its correlation with principal evaluations, this paper also finds a significant positive relationship between teachers’ sense of self-efficacy and student achievement growth.” p. 10 1stness Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Tennessee's TVAAS value-added measurement system had been running a decade when they wrote this and did much of what these authors claim had never been done.
36 Brian A. Jacob Jonah Rockoff, Thomas J. Kane, Douglas O. Staiger “While use of commercial selection instruments has grown considerably, there is little systematic evidence on the power of these instruments for predicting teacher effectiveness.” p. 11 Dismissive Can You Recognize an Effective Teacher When You Recruit One? NBER Working Paper 14485, November 2008 We are grateful to the Spencer Foundation and the Carnegie Corporation for generous financial support. http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
37 Brian A. Jacob Lars Lefgren “The few studies that examine the correlation between principal evaluations and other measures of teacher performance, such as parent or student satisfaction, find similarly weak relationships (Peterson 1987, 2000).” p. 6, note 4 Dismissive Can Principals Identify Effective Teachers? Evidence on Subjective Performance Evaluation in Education June 2007, [eventually published under this title in the Journal of Labor Economics, Vol. 26, No. 1 (2008), pp. 101–136]    https://economics.byu.edu/Documents/Lars%20Lefgren/papers/principals.pdf Just like there exists a massive literature on "effective schools" there exists a subset on "effective teachers."
38 Brian A. Jacob   "While such divergence has been documented in several studies, there has been no systematic analysis of this issue nationwide." Dismissive TEST-BASED ACCOUNTABILITY AND STUDENT ACHIEVEMENT: AN INVESTIGATION OF DIFFERENTIAL PERFORMANCE ON NAEP AND STATE ASSESSMENTS, p.2 NBER Working Paper 12817 Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://www.nber.org/papers/w12817 In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
39 Brian A. Jacob   "To fill this gap in the research literature, over a period of several years I collected…" Dismissive TEST-BASED ACCOUNTABILITY AND STUDENT ACHIEVEMENT: AN INVESTIGATION OF DIFFERENTIAL PERFORMANCE ON NAEP AND STATE ASSESSMENTS, p.3 NBER Working Paper 12817 Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://www.nber.org/papers/w12817 In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
40 Brian A. Jacob   "More importantly, there has been little research on the reasons why student performance differs between NAEP and local assessments." Dismissive TEST-BASED ACCOUNTABILITY AND STUDENT ACHIEVEMENT: AN INVESTIGATION OF DIFFERENTIAL PERFORMANCE ON NAEP AND STATE ASSESSMENTS, p.11 NBER Working Paper 12817 Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://www.nber.org/papers/w12817 In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
41 Brian A. Jacob   "Jacob (2005) makes an effort to fill this gap. He found…" Dismissive TEST-BASED ACCOUNTABILITY AND STUDENT ACHIEVEMENT: AN INVESTIGATION OF DIFFERENTIAL PERFORMANCE ON NAEP AND STATE ASSESSMENTS, p.12 NBER Working Paper 12817 Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). http://www.nber.org/papers/w12817 In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy 2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000),  Stancavage, Et al (2002),  Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004).     
42 Lars Lefgren Brian A. Jacob "While principals can and do judge teachers’ performance, however, there is little good evidence on the accuracy of their judgments. The research reported in this paper fills this gap." Dismissive, 1stness When principals rate teachers: the best--and the worst--stand out Education Next, Spring 2006 / Vol. 6, No. 2 Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/whenprincipalsrateteachers/ The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
43 Brian A. Jacob Lars Lefgren “An important feature of most Section 8 programs including Gautreaux and MTO is that they involve voluntary relocation. Only a few studies examine forced relocation.” p. 1 Dismissive Principals as Agents: Subjective Performance Measurement in Education Kennedy School of Government Faculty Research Working Paper Series, RWP05-040, June 2005  Harvard Kennedy School http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.9537&rep=rep1&type=pdf      
44 Brian A. Jacob Lars Lefgren “The paper also speaks to the broader literature on subjective performance assessment.  While such evaluations are central to promotion, retention and compensation decisions in most industries, they have received relatively little attention in the economics literature (Prendergast 1999).” p. 5 Dismissive Principals as Agents: Subjective Performance Measurement in Education Kennedy School of Government Faculty Research Working Paper Series, RWP05-040, June 2005  Harvard Kennedy School http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.9537&rep=rep1&type=pdf      
45 Brian A. Jacob Lars Lefgren “The few studies that examine the correlation between principal evaluations and other measures of teacher performance, such as parent or student satisfaction, find similarly weak relationships (Peterson 1987, 2000).” p. 6, note 8 Dismissive Principals as Agents: Subjective Performance Measurement in Education Kennedy School of Government Faculty Research Working Paper Series, RWP05-040, June 2005  Harvard Kennedy School http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.9537&rep=rep1&type=pdf The authors should have looked in the Industrial/Organizational Psychology (i.e., Personnel Psychology) literature, for example Hunter & Schmidt's meta-analyses of the use of test instruments in personnel selection.
46 Brian A. Jacob   "The recent federal education bill, No Child Left Behind, requires states to test students in grades three to eight each year, and to judge school performance on the basis of these test scores. While intended to maximize student learning, there is little empirical evidence about the effectiveness of such policies." Dismissive Accountability, Incentives and Behavior: The Impact of High-Stakes Testing in the Chicago Public Schools, Abstract, 2002. Journal of Public Economics, Volume 89, Issues 5–6, June 2005, Pages 761-796. Funding for this research was provided by the Spencer Foundation http://www.nber.org/papers/w8968.pdf ; http://www.sciencedirect.com/science/article/pii/S0047272704001549 See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weerasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
47 Brian A. Jacob   "Despite its increasing popularity within education, there is little empirical evidence on test-based accountability (also referred to as high-stakes testing)." Dismissive Accountability, Incentives and Behavior: The Impact of High-Stakes Testing in the Chicago Public Schools, p. 2, 2002. Journal of Public Economics, Volume 89, Issues 5–6, June 2005, Pages 761-796. Funding for this research was provided by the Spencer Foundation http://www.nber.org/papers/w8968.pdf ; http://www.sciencedirect.com/science/article/pii/S0047272704001549 See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weerasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
48 Brian A. Jacob   "...most studies of school-based accountability do not utilize individual students data and thus cannot examine many outcomes of interest or investigate how effects vary across students." Denigrating Accountability, Incentives and Behavior: The Impact of High-Stakes Testing in the Chicago Public Schools, p. 2, 2002. Journal of Public Economics, Volume 89, Issues 5–6, June 2005, Pages 761-796. Funding for this research was provided by the Spencer Foundation http://www.nber.org/papers/w8968.pdf ; http://www.sciencedirect.com/science/article/pii/S0047272704001549 See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weerasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
49 Brian A. Jacob   "...the federal government has moved to ensure a minimal level of testing and reporting that only a decade ago would have been unthinkable." Dismissive Accountability, Incentives and Behavior: The Impact of High-Stakes Testing in the Chicago Public Schools, p. 2, 2002. Journal of Public Economics, Volume 89, Issues 5–6, June 2005, Pages 761-796. Funding for this research was provided by the Spencer Foundation http://www.nber.org/papers/w8968.pdf ; http://www.sciencedirect.com/science/article/pii/S0047272704001549 See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weerasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
50 Brian A. Jacob Lars Lefgren "As standards and accountability have become increasingly prominent features of the educational landscape, educators have relied more on remedial programs such as summer school and grade retention to help low-achieving students meet minimum academic standards. Yet the evidence on the effectiveness of such programs is mixed, and prior research suffers from selection bias." Denigrating Remedial Education and Student Achievement: A Regression-Discontinuity Analysis The Review of Economics and Statistics, February 2004, Vol. 86, No. 1, Pages: 226-244   http://www.mitpressjournals.org/doi/abs/10.1162/003465304323023778#.WRSzmcm1vHF For example, from Table 2, Chapter 3 of Correcting Fallacies, "Developmental (i.e., remedial) education researchers have conducted many studies to determine what works best to keep students from failing in their “courses of last resort,” after which there are no alternatives.  Researchers have included Boylan, Roueche, McCabe, Wheeler, Kulik, Bonham, Claxton, Bliss, Schonecker, Chen, Chang, and Kirk."    
51 Steven D. Levitt Brian A. Jacob "These scandals have aroused public concern, but there has been little hard evidence on the extent of cheating by school personnel on the type of tests required by recently enacted accountability legislation." Dismissive To catch a cheat Education Next, Winter 2004 / Vol. 4, No. 1 Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/tocatchacheat/ See, for example, https://nonpartisaneducation.org/Review/Articles/v6n3.htm ; https://nonpartisaneducation.org/Review/Books/CannellBook1.htm    
52 Brian A. Jacob Steven D. Levitt “There has been very little previous empirical analysis of teacher cheating. ...Our paper represents the first systematic attempt to (1) identify the overall prevalence of teacher cheating empirically and (2) analyze the factors that predict cheating.” p. 845 1stness Rotten Apples: An Investigation of the Prevalence and Predictors of Teacher Cheating Quarterly Journal of Economics, August 2003  Financial support was provided by the National Science Foundation and the Sloan Foundation. http://pricetheory.uchicago.edu/levitt/Papers/JacobLevitt2003.pdf See, for example, https://nonpartisaneducation.org/Review/Articles/v6n3.htm ; https://nonpartisaneducation.org/Review/Books/CannellBook1.htm
53 Brian A. Jacob Steven D. Levitt “Finally, this paper fits into a small but growing body of research focused on identifying corrupt or illicit behavior on the part of economic actors (see Porter and Zona [1993], Fisman [2001], Di Tella and Schargrodsky [2001], and Duggan and Levitt [2002]).” p. 871 Dismissive Rotten Apples: An Investigation of the Prevalence and Predictors of Teacher Cheating Quarterly Journal of Economics, August 2003  Financial support was provided by the National Science Foundation and the Sloan Foundation. http://pricetheory.uchicago.edu/levitt/Papers/JacobLevitt2003.pdf See, for example, https://nonpartisaneducation.org/Review/Articles/v6n3.htm ; https://nonpartisaneducation.org/Review/Books/CannellBook1.htm
54 Brian A. Jacob   “Despite this shift, there is relatively little evidence on the impact of public housing or housing vouchers on educational outcomes.” p. 1 Dismissive Public Housing, Housing Vouchers and Student Achievement: Evidence from Public Housing Demolitions in Chicago  Working Paper 9652, April 2003   http://core.ac.uk/download/pdf/6707824.pdf      
55 Brian A. Jacob   “An important feature of most Section 8 programs including Gautreaux and MTO is that they involve voluntary relocation. Only a few studies examine forced relocation.”, p.1 Dismissive Public Housing, Housing Vouchers and Student Achievement: Evidence from Public Housing Demolitions in Chicago  Working Paper 9652, April 2003   http://core.ac.uk/download/pdf/6707824.pdf      
56 Brian A. Jacob   "Chicago’s experience with accountability provides  some  lessons  for  other  districts and states as they begin to implement  the  mandates  of  No  Child  Left Behind." Dismissive High Stakes in Chicago Education Next, v.1., p.66, 2003. Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/highstakesinchicago/ https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures    
57 Brian A. Jacob   "As the first large urban school district to introduce a comprehensive accountability system, [our city] provides an exceptional case study of the effects of high-stakes testing--a reform strategy that will become omnipresent as the No Child Left Behind Act is implemented nationwide." 1stness High Stakes in Chicago Education Next, v.1., p.66, 2003. Harvard PEPG and Thomas B. Fordham Institute http://educationnext.org/highstakesinchicago/ https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures    
58 Brian A. Jacob Melissa Roderick, Anthony Bryk "There has been little investigation of whether the purported benefits of these policies (of standardized grade promotion testing)-in the form of increased achievement on standardized tests-actually occur. ...We know very little about whether the introduction of high-stakes testing, particularly when combined with extra resources and with school accountability measures, will increase achievement on standardized tests for all students prior to the promotional gate (both those who are promoted as well as those who may later be retained)." Dismissive The impact of high-stakes testing in Chicago on student achievement in the promotional gate grades. Educational Evaluation and Policy Analysis, 24(4):333-57, 2002.     https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures
59 Brian A. Jacob Melissa Roderick, Anthony Bryk "In 1996, [our city's schools] became one of the first large, urban school districts to implement high-stakes testing, introducing a comprehensive accountability program that incorporated incentives for both students and teachers." 1stness The impact of high-stakes testing in Chicago on student achievement in the promotional gate grades. Educational Evaluation and Policy Analysis, 24(4):333-57, 2002.     https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures
60 Brian A. Jacob Melissa Roderick, Anthony Bryk "In 1996, [our city] began a national trend when it coupled a new school-level accountability program with an accountability initiative with high-stakes consequences for students. ...Over the past five years, virtually every major school system and many states...have instituted elements of [our city's] policy." 1stness The impact of high-stakes testing in Chicago on student achievement in the promotional gate grades. Educational Evaluation and Policy Analysis, 24(4):333-57, 2002.     https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures
61 Brian A. Jacob Lars Lefgren "As standards and accountability have become an increasingly prominent feature of the educational landscape, educators have relied more on remedial programs such as summer school and grade retention to help low-achieving students meet minimum academic standards. Yet the evidence on the effectiveness of such programs is mixed, and prior research suffers from selection bias." Denigrating REMEDIAL EDUCATION AND STUDENT ACHIEVEMENT: A REGRESSION-DISCONTINUITY ANALYSIS, abstract NBER WORKING PAPER SERIES, Working Paper 8918   http://www.nber.org/papers/w8918 For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of testing on at-risk students, completion, dropping out, curricular offerings, attitudes, etc. include those of Schleisman (1999); the *Southern Regional Education Board (1998); Webster, Mendro, Orsak, Weerasinghe & Bembry (1997); Jones (1996); Boylan (1996); Jones, 1993; Jacobson (1992); Grisay (1991); Johnstone (1990); Task Force on Educational Assessment Programs [Florida] (1979); Wellisch, MacQueen, Carriere & Duck (1978); Enochs (1978); Pronaratna (1976); and McWilliams & Thomas (1976)."    
62 Brian A. Jacob Lars Lefgren "Aware of the importance of education, economists have spent considerable effort examining what factors affect academic achievement. There is a large literature on the importance of financial resources in determining educational outcomes. However, researchers have paid considerably less attention to remedial programs designed to improve the performance of low achieving students, including summer school and grade retention (Eide and Showalter forthcoming)." Dismissive REMEDIAL EDUCATION AND STUDENT ACHIEVEMENT: A REGRESSION-DISCONTINUITY ANALYSIS, p.1 NBER WORKING PAPER SERIES, Working Paper 8918 NBER funders http://www.nber.org/papers/w8918 For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of testing on at-risk students, completion, dropping out, curricular offerings, attitudes, etc. include those of Schleisman (1999); the *Southern Regional Education Board (1998); Webster, Mendro, Orsak, Weerasinghe & Bembry (1997); Jones (1996); Boylan (1996); Jones, 1993; Jacobson (1992); Grisay (1991); Johnstone (1990); Task Force on Educational Assessment Programs [Florida] (1979); Wellisch, MacQueen, Carriere & Duck (1978); Enochs (1978); Pronaratna (1976); and McWilliams & Thomas (1976)."    
63 Brian A. Jacob   "There is less evidence on whether, and to what extent, accountability programs lead to test score inflation." p.2 Dismissive "Test-Based Accountability and Student Achievement Gains: Theory and Evidence" Taking Account of Accountability: Assessing Politics and Policy, John F. Kennedy School of Government. Harvard University, June 10 - 11, 2002     In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature:  https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Messick & Jungeblut (1981)  Ellis, Konoske, Wulfeck, & Montague (1982)  DerSimonian and Laird (1983)  Kulik, Bangert-Drowns & Kulik (1984) Fraker (1986/1987) Halpin (1987) Whitla (1988)  Snedecor (1989)  Becker (1990)  Smyth (1990) Moore (1991)  Alderson & Wall (1992)  Powers (1993)  Powers & Rock (1994)  Scholes, Lane (1997)   Allalouf & Ben Shakhar (1998)  Robb & Ercanbrack (1999)  McClain (1999)  Camara (1999, 2001, 2008) Stone & Lane (2000, 2003)  Din & Soldan (2001)  Briggs (2001)  Palmer (2002)  Briggs & Hansen (2004)  Cankoy & Ali Tut (2005)  Crocker (2005)  Allensworth, Correa, & Ponisciak (2008)  Domingue & Briggs (2009)   Early (2019)    
64 Brian A. Jacob   "The Chicago Public Schools (ChiPS) was one of the first large, urban school districts to implement high-stakes testing. In 1996-97," p.6 1stness "Test-Based Accountability and Student Achievement Gains: Theory and Evidence" Taking Account of Accountability: Assessing Politics and Policy, John F. Kennedy School of Government. Harvard University, June 10 - 11, 2002 Harvard Kennedy School   https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration procedures. There have been "large, urban school districts" with high-stakes testing since at least the 1940s.    
65 Brian A. Jacob   “Nearly 20 years later, the debate surrounding MCT [minimum competency tests] remains much the same, consisting primarily of opinion and speculation.... A lack of solid empirical research has allowed the controversy to continue unchecked by evidence or experience... This paper... makes several improvements on the current literature by...” p. 99 Denigrating Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
66 Brian A. Jacob   “The lack of empirical research on the achievement effects of mandatory graduation exams is striking, particularly in light of their growing popularity across the nation. The few studies that have examined the impact of MCT on student achievement tend to focus on younger children in low stakes testing environments.” p. 101 Dismissive Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
67 Brian A. Jacob   "Winfield (1990) … focuses exclusively on school-level exams whose relative salience for students is unclear. Controlling for a variety of individual, school, and regional variables (though not prior achievement), …" p. 101 Denigrating Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf State-mandated exams are administered at the school level and thus are school-level exams. Moreover, it would appear that Winfield did control for prior achievement, within the group of control variables she called "academic behaviors."    
68 Brian A. Jacob   “...the evidence on graduation exams and achievement is limited and mixed, ....” p. 101 Dismissive Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
69 Brian A. Jacob   “Kreitzer, Madaus, and Haney (1989) ... note that the positive correlation between state graduation test requirements and dropout rates is thought provoking, but that there is no solid empirical evidence for a causal link between test policy and dropout rates.” pp. 101–102 Dismissive Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of testing on at-risk students, completion, dropping out, curricular offerings, attitudes, etc. include those of Schleisman (1999); the *Southern Regional Education Board (1998); Webster, Mendro, Orsak, Weerasinghe & Bembry (1997); Jones (1996); Boylan (1996); Jones, 1993; Jacobson (1992); Grisay (1991); Johnstone (1990); Task Force on Educational Assessment Programs [Florida] (1979); Wellisch, MacQueen, Carriere & Duck (1978); Enochs (1978); Pronaratna (1976); and McWilliams & Thomas (1976)."    
70 Brian A. Jacob   “More important, states and districts that implement such exams may have other policies or characteristics that act to reduce the probability of dropping out. Few studies have rigorously addressed this question.” p. 102 Dismissive Getting Tough? The Impact of High School Graduation Exams Educational Evaluation and Policy Analysis, Summer 2001, Vol. 23, No. 2, pp. 99-121   Google cache of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.407.5840&rep=rep1&type=pdf For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of testing on at-risk students, completion, dropping out, curricular offerings, attitudes, etc. include those of Schleisman (1999); the *Southern Regional Education Board (1998); Webster, Mendro, Orsak, Weerasinghe & Bembry (1997); Jones (1996); Boylan (1996); Jones, 1993; Jacobson (1992); Grisay (1991); Johnstone (1990); Task Force on Educational Assessment Programs [Florida] (1979); Wellisch, MacQueen, Carriere & Duck (1978); Enochs (1978); Pronaratna (1976); and McWilliams & Thomas (1976)."    
71 Brian A. Jacob   "School reforms designed to hold students and teachers accountable for student achievement have become increasingly popular in recent years. Yet there is little empirical evidence on how such policies impact student or teacher behavior, or how they ultimately affect student achievement." Abstract Dismissive THE IMPACT OF HIGH-STAKES TESTING ON STUDENT ACHIEVEMENT: EVIDENCE FROM CHICAGO, June 2001   "Funding for this research was provided by the Spencer Foundation."   See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
72 Brian A. Jacob   "Despite the increasing popularity of high-stakes testing, there is little evidence on how such policies influence student or teacher behavior, or how they ultimately affect student achievement." p.3 Dismissive THE IMPACT OF HIGH-STAKES TESTING ON STUDENT ACHIEVEMENT: EVIDENCE FROM CHICAGO, June 2001   "Funding for this research was provided by the Spencer Foundation."   See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
73 Brian A. Jacob   "Chicago was one of the first large, urban school districts to implement a comprehensive high-stakes accountability policy. Beginning in 1996,..." p.3 1stness THE IMPACT OF HIGH-STAKES TESTING ON STUDENT ACHIEVEMENT: EVIDENCE FROM CHICAGO, June 2001   "Funding for this research was provided by the Spencer Foundation."   https://nonpartisaneducation.org/Foundation/ThinkTankThoughtlessness.htm ; per usual for such studies, there were no controls for any variation in test administration or security    
74 Brian A. Jacob   "While several studies found a positive association between student achievement and minimum competency testing (Bishop 1998, Frederisksen 1994, Neill 1998, Winfield, 1990), a recent study with better controls for prior student achievement finds no effect (Jacob 2001)." p.7 Denigrating THE IMPACT OF HIGH-STAKES TESTING ON STUDENT ACHIEVEMENT: EVIDENCE FROM CHICAGO, June 2001   "Funding for this research was provided by the Spencer Foundation."   See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
75 Brian A. Jacob   "Another alternative is to mandate particular curriculum programs or instructional practices. ... Unfortunately, there is little evidence that such mandated programmatic reforms have significant effects on student learning (Jacob and Lefgren 2001b)." p.36 Dismissive THE IMPACT OF HIGH-STAKES TESTING ON STUDENT ACHIEVEMENT: EVIDENCE FROM CHICAGO, June 2001   "Funding for this research was provided by the Spencer Foundation."   See, for example,   https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920  https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm For example, from Table 2, Chapter 3 of Correcting Fallacies: "The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above.  Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsak, Weeasinghe, and Bembry" For example, from Table 3, Chapter 3 of Correcting Fallacies: "Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978)."
                       
      Author cites (and accepts without checking) someone else's dismissive review
      Cites self or colleagues in the group, but dismisses or denigrates all other work
      Falsely claims that research has only recently been done on the topic.