Dismissive Reviews in Education Policy Research
  No. Author Co-author(s) Dismissive quote Type Title Source Funders Link Note
1 Jennifer L. Jennings Douglas Lee Lauen "Despite the ongoing public debate about the meaning of state test score gains, no study has examined the impact of accountability pressure from NCLB on multiple tests taken by the same students." 1stness Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p.222 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
2 Jennifer L. Jennings Douglas Lee Lauen "Still, little is known about the effects of accountability pressure across demographic groups on multiple measures of student learning; addressing this gap is one goal of our study." Dismissive Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 223 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
3 Jennifer L. Jennings Douglas Lee Lauen "In sum, all of the studies described here establish positive average effects of NCLB beyond state tests but do not assess the generalizability of state test gains to other measures of achievement. Our study…" 1stness Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 223 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
4 Jennifer L. Jennings Douglas Lee Lauen "Our study contributes to a small but growing literature examining the relationship between school-based responses to accountability pressure and student performance on multiple measures of learning, which requires student-level data and test scores from multiple exams." Dismissive Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 223 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
5 Jennifer L. Jennings Douglas Lee Lauen "Only one study has examined the effect of accountability pressure on multiple tests, but this study is from the pre-NCLB era. Jacob (2005) used item-level data to better understand the mechanisms underlying differential gains across tests." Dismissive Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 223 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
6 Jennifer L. Jennings Douglas Lee Lauen "While the studies reviewed here have established the effects of accountability systems on outcomes, they have devoted less attention to studying heterogeneity in how educators perceive external pressures and react to them. Because the lever for change in accountability systems is educational improvement in response to external pressure, this is an important oversight." Denigrating Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 224 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
7 Jennifer L. Jennings Douglas Lee Lauen "A unique feature of this study is the availability of multiple test scores for each student— both the Texas Assessment of Knowledge and Skills (TAKS) and the Stanford Achievement Test battery." 1stness Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning, p. 225 The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
8 Jennifer L. Jennings Douglas Lee Lauen "Whether focusing on predictable content is a desirable practice depends on the relevance of each standard to the inference one wants to make from state test scores. State policymakers may believe that some standards are more important than others and explicitly build such guidance into their instructions to test designers. However, we are aware of no states that provided guidance to test firms at the individual standard level during the NCLB era; ultimately, testing contractors have made these decisions." Dismissive Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
9 Jennifer L. Jennings Douglas Lee Lauen "We believe that our study provides the best available evidence about the effects of accountability pressure on multiple tests in the NCLB era, ..." p.238 Denigrating Accountability, Inequality, and Achievement: The Effects of the No Child Left Behind Act on Multiple Measures of Student Learning The Russell Sage Foundation Journal of the Social Sciences, Volume 2, Number 5, September 2016, pp. 220-241   https://muse.jhu.edu/article/633744 As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
10 David J. Deming Sarah Cohodes, Jennifer Jennings, Christopher Jencks "In fact, we know very little about the impact of test-based accountability on students’ later success." Dismissive When does accountability work? Education Next, WINTER 2016 / VOL. 16, NO. 1 Harvard Kennedy School; Thomas B. Fordham Foundation & Institute http://educationnext.org/when-does-accountability-work-texas-system/  
11 David J. Deming Sarah Cohodes, Jennifer Jennings, Christopher Jencks "In this study, we present the first evidence of how accountability pressure on schools influences students’ long-term outcomes." 1stness When does accountability work? Education Next, WINTER 2016 / VOL. 16, NO. 1 Harvard Kennedy School; Thomas B. Fordham Foundation & Institute http://educationnext.org/when-does-accountability-work-texas-system/  
12 David J. Deming Sarah Cohodes, Jennifer Jennings, Christopher Jencks "What we don’t know is: Do these improvements on high-stakes tests represent real learning gains? " Dismissive When does accountability work? Education Next, WINTER 2016 / VOL. 16, NO. 1 Harvard Kennedy School; Thomas B. Fordham Foundation & Institute http://educationnext.org/when-does-accountability-work-texas-system/  
13 David J. Deming Sarah Cohodes, Jennifer Jennings, Christopher Jencks "Our study overcomes the limits of short-term analysis by asking: when schools face accountability pressure, do their efforts to raise test scores generate improvements in higher education attainment, earnings, and other long-term outcomes?" Denigrating When does accountability work? Education Next, WINTER 2016 / VOL. 16, NO. 1 Harvard Kennedy School; Thomas B. Fordham Foundation & Institute http://educationnext.org/when-does-accountability-work-texas-system/  
14 Sean P. Corcoran Jennifer L. Jennings "Second, we have limited evidence on the extent to which teachers' short-run effects on achievement correspond to long-term impacts on achievement, attainment, and well-being (Chetty, Rockoff, and Friedman, 2011)." Dismissive Teacher effectiveness on high- and low-stakes tests  NYU Steinhardt School of Culture, Education, and Human Development, New York "We would like to thank ... the IES pre-doctoral training program for providing research support. Jennings received additional support for this project from IES-AERA and Spencer Foundation dissertation fellowships." https://www.nyu.edu/projects/corcoran/papers/Corcoran_Jennings_Houston_Teacher_Effects.pdf
15 Sean P. Corcoran Jennifer L. Jennings "Comparatively less attention has been given to the outcome measure itself. While some studies have examined the role test scaling plays in value-added, (e.g., Ballou, 2009; Briggs and Weeks, 2009; Koedel and Betts, 2009), fewer have validated teacher effects against other short- or long-run outcomes of interest." Dismissive Teacher effectiveness on high- and low-stakes tests  NYU Steinhardt School of Culture, Education, and Human Development, New York "We would like to thank ... the IES pre-doctoral training program for providing research support. Jennings received additional support for this project from IES-AERA and Spencer Foundation dissertation fellowships." https://www.nyu.edu/projects/corcoran/papers/Corcoran_Jennings_Houston_Teacher_Effects.pdf
16 Jennifer L. Jennings Heeju Sohn "Our study is the first to bring together these two issues and isolate the relevance of proficiency standard difficulty for inequality in academic achievement on both high- and low-stakes tests." 1stness Measure for Measure: How Proficiency-based Accountability Systems Affect Inequality in Academic Achievement Sociology of Education, 87(2) 125–141 "The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305AII0420, and by the Spencer Foundation, through Grants 201100075 and 201200071, to the President and Fellows of Harvard College. Sohn’s work was supported by the National Institute of Child Health and Human Development, NIH through Grant 5T32 HD-007242."    
17 Jennifer L. Jennings Jonathan Marc Bearak "Despite the ongoing public debate about the meaning of state test score gains under NCLB, no study has attempted to quantify the extent to which NCLB-era state tests had features that enabled teaching to the test." p.381 Dismissive “Teaching to the Test” in the NCLB Era: How Test Predictability Affects Our Understanding of Student Performance Educational Researcher, 43(8), 381–389 (2014) "Funding for this study was provided by the Spencer Foundation (Grant/Award Nos. 201100075 and 201200071) and the Institute for Education Sciences (Grant/Award No. R305AII0420)." https://files.eric.ed.gov/fulltext/EJ1044311.pdf As usual with Koretz-Hamilton-Jennings studies, variation in test administration is completely ignored, as if it could not possibly be relevant, and variation in test content is mostly wished away, as if it did not matter either. They make apples-to-oranges comparisons. Meanwhile, the large relevant experimental research literature is declared nonexistent.
18 Jennifer L. Jennings Jonathan Marc Bearak "Nor have previous papers attempted to clarify the concept of teaching to the test," p.381 Denigrating “Teaching to the Test” in the NCLB Era: How Test Predictability Affects Our Understanding of Student Performance Educational Researcher, 43(8), 381–389 (2014) "Funding for this study was provided by the Spencer Foundation (Grant/Award Nos. 201100075 and 201200071) and the Institute for Education Sciences (Grant/Award No. R305AII0420)." https://files.eric.ed.gov/fulltext/EJ1044311.pdf As usual with Koretz-Hamilton-Jennings studies, variation in test administration is completely ignored, as if it could not possibly be relevant, and variation in test content is mostly wished away, as if it did not matter either. They make apples-to-oranges comparisons. Meanwhile, the large relevant experimental research literature is declared nonexistent.
19 Jennifer L. Jennings Jonathan Marc Bearak "Our study is one of the first to empirically test for a specific opportunity for teaching to the test in NCLB-era tests—predictability—and to estimate whether predictability is associated with improved performance on these items." p.381 1stness “Teaching to the Test” in the NCLB Era: How Test Predictability Affects Our Understanding of Student Performance Educational Researcher, 43(8), 381–389 (2014) "Funding for this study was provided by the Spencer Foundation (Grant/Award Nos. 201100075 and 201200071) and the Institute for Education Sciences (Grant/Award No. R305AII0420)." https://files.eric.ed.gov/fulltext/EJ1044311.pdf As usual with Koretz-Hamilton-Jennings studies, variation in test administration is completely ignored, as if it could not possibly be relevant, and variation in test content is mostly wished away, as if it did not matter either. They make apples-to-oranges comparisons. Meanwhile, the large relevant experimental research literature is declared nonexistent.
20 Jennifer L. Jennings Jonathan Marc Bearak "Our study is the only one of which we are aware that identifies and tests for a specific mechanism of teaching to the test in multiple states during the NCLB era." p.386 1stness “Teaching to the Test” in the NCLB Era: How Test Predictability Affects Our Understanding of Student Performance Educational Researcher, 43(8), 381–389 (2014) "Funding for this study was provided by the Spencer Foundation (Grant/Award Nos. 201100075 and 201200071) and the Institute for Education Sciences (Grant/Award No. R305AII0420)." https://files.eric.ed.gov/fulltext/EJ1044311.pdf As usual with Koretz-Hamilton-Jennings studies, variation in test administration is completely ignored, as if it could not possibly be relevant, and variation in test content is mostly wished away, as if it did not matter either. They make apples-to-oranges comparisons. Meanwhile, the large relevant experimental research literature is declared nonexistent.
21 Daniel M. Koretz Holcombe, Jennings “To date, few studies have attempted to understand the sources of variation in score inflation across testing programs.” p. 3 Dismissive The roots of score inflation, an examination of opportunities in two states’ tests  Prepublication draft “to appear in Sunderman (Ed.), Charting reform: achieving equity in a diverse nation”   http://dash.harvard.edu/bitstream/handle/1/10880587/roots%20of%20score%20inflation.pdf?sequence=1 As usual with Koretz-Hamilton-Jennings studies, variation in test administration is completely ignored, as if it could not possibly be relevant, and variation in test content is mostly wished away, as if it did not matter either. They make apples-to-oranges comparisons. Meanwhile, the large relevant experimental research literature is declared nonexistent.
22 Jennifer L. Jennings Heeju Sohn "Unlike existing studies, however, we are able to evaluate how proficiency-based accountability systems affect two outcomes: state tests used for accountability (“high-stakes tests”) and a second measure of achievement in which the stakes are low for individual teachers and schools (hereafter, “low-stakes tests”)." p.3 Denigrating Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement Sociology of Education 2014 April ; 87(2)     As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
23 Jennifer L. Jennings Heeju Sohn "While a number of studies have considered the average effects of state accountability systems (Carnoy and Loeb 2002; Dee and Jacob 2011; Hanushek and Raymond 2004; Hout and Elliott 2011; Wong, Cook, and Steiner 2009), whether these systems consistently help lower-, average-, and high-performing students is less clear." p.3 Dismissive, Denigrating Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement Sociology of Education 2014 April ; 87(2)     As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
24 Jennifer L. Jennings Heeju Sohn "Our study is the first to bring together these two issues and isolate the relevance of proficiency standard difficulty for inequality in academic achievement on both high and low-stakes tests." p.6 1stness Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement Sociology of Education 2014 April ; 87(2)     As usual with Koretz-Hamilton-Jennings studies, there are no controls for test administration or test content factors. The authors claim that they cannot match test content between the two tests--the Stanford and Texas' TAKS--because the Stanford is proprietary. So, they cite an earlier content match study of a different grade with an earlier Texas test, the TAAS (p.226). Even then, common standards (not items) represent only 61% of the item pool. The authors are comparing apples and oranges.
25 David J. Deming  Sarah Cohodes, Jennifer Jennings, Christopher Jencks "However, more than a decade after the passage of NCLB, we know very little about the impact of test-based accountability on students’ long-run life chances." Dismissive SCHOOL ACCOUNTABILITY, POSTSECONDARY ATTAINMENT AND EARNINGS, p.2 NATIONAL BUREAU OF ECONOMIC RESEARCH, Working Paper 19444, September 2013 NBER Funders https://scholar.harvard.edu/files/ddeming/files/w19444.pdf
26 David J. Deming  Sarah Cohodes, Jennifer Jennings, Christopher Jencks "Previous work has found large gains on high-stakes tests, with some evidence of smaller gains on low-stakes exams that is inconsistent across grades and subjects (e.g. Koretz and Barron 1998, Linn 2000, Klein et al. 2000, Carnoy and Loeb 2002, Hanushek and Raymond 2005, Jacob 2005, Wong, Cook and Steiner 2009, Dee and Jacob 2010, Reback, Rockoff and Schwartz 2011)." Dismissive SCHOOL ACCOUNTABILITY, POSTSECONDARY ATTAINMENT AND EARNINGS, p.2 NATIONAL BUREAU OF ECONOMIC RESEARCH, Working Paper 19444, September 2013 NBER Funders https://scholar.harvard.edu/files/ddeming/files/w19444.pdf
27 David J. Deming  Sarah Cohodes, Jennifer Jennings, Christopher Jencks "...Previous research has focused on measuring performance on low-stakes exams." Dismissive SCHOOL ACCOUNTABILITY, POSTSECONDARY ATTAINMENT AND EARNINGS, p.4 NATIONAL BUREAU OF ECONOMIC RESEARCH, Working Paper 19444, September 2013 NBER Funders https://scholar.harvard.edu/files/ddeming/files/w19444.pdf "Previous research" within their small group of colleagues has been this narrow. Outside their group, it's a different story. 
28 David J. Deming  Sarah Cohodes, Jennifer Jennings, Christopher Jencks "The literature on school accountability has focused on low-stakes tests, in an attempt to measure whether gains on high-stakes exams represent generalizable gains in student learning. Recent studies of accountability in multiple states have found achievement gains across subjects and grades on low-stakes exams (Ladd 1999, Carnoy and Loeb 2002, Greene and Winters 2003, Hanushek and Raymond 2005, Figlio and Rouse 2006, Chiang 2009, Dee and Jacob 2010, Wong, Cook and Steiner 2011, Allen and Burgess 2012)." Dismissive SCHOOL ACCOUNTABILITY, POSTSECONDARY ATTAINMENT AND EARNINGS, p.6 NATIONAL BUREAU OF ECONOMIC RESEARCH, Working Paper 19444, September 2013 NBER Funders https://scholar.harvard.edu/files/ddeming/files/w19444.pdf "The literature on school accountability" within their small group of colleagues has had this narrow focus. Outside their group, it's a different story.
29 David J. Deming  Sarah Cohodes, Jennifer Jennings, Christopher Jencks "To our knowledge, only two studies look at the long-term impact of school accountability on postsecondary outcomes." Dismissive SCHOOL ACCOUNTABILITY, POSTSECONDARY ATTAINMENT AND EARNINGS, p.7 NATIONAL BUREAU OF ECONOMIC RESEARCH, Working Paper 19444, September 2013 NBER Funders https://scholar.harvard.edu/files/ddeming/files/w19444.pdf
30 Daniel M. Koretz Jennifer L. Jennings  “We find that research on the use of test score data is limited, and research investigating the understanding of tests and score data is meager.” p. 1 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities/ Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
31 Daniel M. Koretz Jennifer L. Jennings “Because of the sparse research literature, we rely on experience and anecdote in parts of this paper, with the premise that these conclusions should be supplanted over time by findings from systematic research.” p. 1 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
32 Daniel M. Koretz Jennifer L. Jennings "...the relative performance of schools is difficult to interpret in the presence of score inflation. At this point, we know very little about the factors that may predict higher levels of inflation —for example, characteristics of tests, accountability systems, students, or schools." p.4 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
33 Daniel M. Koretz Jennifer L. Jennings “We focus on three issues that are especially relevant to test-based data and about which research is currently sparse: How do the types of data made available for use affect policymakers’ and educators’ understanding of data? What are the common errors made by policymakers and educators in interpreting test score data? How do high-stakes testing and the availability of test-based data affect administrator and teacher practice?” (p. 5) Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
34 Daniel M. Koretz Jennifer L. Jennings “Systematic research exploring educators’ understanding of both the principles of testing and appropriate interpretation of test-based data is meager.”, p.5 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
35 Daniel M. Koretz Jennifer L. Jennings "Although current, systematic information is lacking, our experience is that that the level of understanding of test data among both educators and education policymakers is in many cases abysmally low.", p.6 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
36 Daniel M. Koretz Jennifer L. Jennings "There has been a considerably (sic) amount of research exploring problems with standards-based reporting, but less on the use and interpretation of standards-based data by important stakeholders." p.12 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
37 Daniel M. Koretz Jennifer L. Jennings "We have heard former teachers discuss this frequently, saying that new teachers in many schools are inculcated with the notion that raising scores in tested subjects is in itself the appropriate goal of instruction. However, we lack systematic data about this..." p.14 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
38 Daniel M. Koretz Jennifer L. Jennings "Research on score inflation is not abundant, largely for the reason discussed above: policymakers for the most part feel no obligation to allow the relevant research, which is not in their self-interest even when it is in the interests of students in schools. However, at this time, the evidence is both abundant enough and sufficiently often discussed that that the existence of the general issue of score inflation appears to be increasingly widely recognized by the media, policymakers, and educators." p.17 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature: https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Ortar (1960); Marron (1965); ETS (1965); Messick & Jungeblut (1981); Ellis, Konoske, Wulfeck, & Montague (1982); DerSimonian and Laird (1983); Kulik, Bangert-Drowns & Kulik (1984); Powers (1985); Jones (1986); Fraker (1986/1987); Halpin (1987); Whitla (1988); Snedecor (1989); Bond (1989); Baydar (1990); Becker (1990); Smyth (1990); Moore (1991); Alderson & Wall (1992); Powers (1993); Oren (1993); Powers & Rock (1994); Scholes, Lane (1997); Allalouf & Ben Shakhar (1998); Robb & Ercanbrack (1999); McClain (1999); Camara (1999, 2001, 2008); Stone & Lane (2000, 2003); Din & Soldan (2001); Briggs (2001); Palmer (2002); Briggs & Hansen (2004); Cankoy & Ali Tut (2005); Crocker (2005); Allensworth, Correa, & Ponisciak (2008); Domingue & Briggs (2009); Koljatic & Silva (2014); Early (2019)
39 Daniel M. Koretz Jennifer L. Jennings "The issue of score inflation is both poorly understood and widely ignored in the research community as well." p.18 Denigrating The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature: https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Ortar (1960); Marron (1965); ETS (1965); Messick & Jungeblut (1981); Ellis, Konoske, Wulfeck, & Montague (1982); DerSimonian and Laird (1983); Kulik, Bangert-Drowns & Kulik (1984); Powers (1985); Jones (1986); Fraker (1986/1987); Halpin (1987); Whitla (1988); Snedecor (1989); Bond (1989); Baydar (1990); Becker (1990); Smyth (1990); Moore (1991); Alderson & Wall (1992); Powers (1993); Oren (1993); Powers & Rock (1994); Scholes, Lane (1997); Allalouf & Ben Shakhar (1998); Robb & Ercanbrack (1999); McClain (1999); Camara (1999, 2001, 2008); Stone & Lane (2000, 2003); Din & Soldan (2001); Briggs (2001); Palmer (2002); Briggs & Hansen (2004); Cankoy & Ali Tut (2005); Crocker (2005); Allensworth, Correa, & Ponisciak (2008); Domingue & Briggs (2009); Koljatic & Silva (2014); Early (2019)
40 Daniel M. Koretz Jennifer L. Jennings "Research on coaching is very limited." p.21 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature: https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Ortar (1960); Marron (1965); ETS (1965); Messick & Jungeblut (1981); Ellis, Konoske, Wulfeck, & Montague (1982); DerSimonian and Laird (1983); Kulik, Bangert-Drowns & Kulik (1984); Powers (1985); Jones (1986); Fraker (1986/1987); Halpin (1987); Whitla (1988); Snedecor (1989); Bond (1989); Baydar (1990); Becker (1990); Smyth (1990); Moore (1991); Alderson & Wall (1992); Powers (1993); Oren (1993); Powers & Rock (1994); Scholes, Lane (1997); Allalouf & Ben Shakhar (1998); Robb & Ercanbrack (1999); McClain (1999); Camara (1999, 2001, 2008); Stone & Lane (2000, 2003); Din & Soldan (2001); Briggs (2001); Palmer (2002); Briggs & Hansen (2004); Cankoy & Ali Tut (2005); Crocker (2005); Allensworth, Correa, & Ponisciak (2008); Domingue & Briggs (2009); Koljatic & Silva (2014); Early (2019)
41 Daniel M. Koretz Jennifer L. Jennings "How is test-based information used by educators? … The types of research done to date on this topic, while useful, are insufficient." p.26 Denigrating The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
42 Daniel M. Koretz Jennifer L. Jennings "… We need to design ways of measuring coaching, which has been almost entirely unstudied." p.26 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities In fact the test prep, or test coaching, literature is vast and dates back decades, with meta-analyses of the literature dating back at least to the 1970s. There's even a What Works Clearinghouse summary of the (post World Wide Web) college admission test prep research literature: https://ies.ed.gov/ncee/wwc/Docs/InterventionReports/wwc_act_sat_100416.pdf . See also: Ortar (1960); Marron (1965); ETS (1965); Messick & Jungeblut (1981); Ellis, Konoske, Wulfeck, & Montague (1982); DerSimonian and Laird (1983); Kulik, Bangert-Drowns & Kulik (1984); Powers (1985); Jones (1986); Fraker (1986/1987); Halpin (1987); Whitla (1988); Snedecor (1989); Bond (1989); Baydar (1990); Becker (1990); Smyth (1990); Moore (1991); Alderson & Wall (1992); Powers (1993); Oren (1993); Powers & Rock (1994); Scholes, Lane (1997); Allalouf & Ben Shakhar (1998); Robb & Ercanbrack (1999); McClain (1999); Camara (1999, 2001, 2008); Stone & Lane (2000, 2003); Din & Soldan (2001); Briggs (2001); Palmer (2002); Briggs & Hansen (2004); Cankoy & Ali Tut (2005); Crocker (2005); Allensworth, Correa, & Ponisciak (2008); Domingue & Briggs (2009); Koljatic & Silva (2014); Early (2019)
43 Daniel M. Koretz Jennifer L. Jennings  “We have few systematic studies of variations in educators’ responses. …” p. 26 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
44 Daniel M. Koretz Jennifer L. Jennings "Ultimately, our concern is the impact of educators’ understanding and use of test data on student learning. However, at this point, we have very little comparative information about the validity of gains, ....  The comparative information that is beginning to emerge suggests..." p.26 Dismissive The Misunderstanding and Use of Data from Educational Tests  Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010 Spencer Foundation http://www.spencer.org/data-use-and-educational-improvement-initiative-activities Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R.K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P Phelps (Ed.) Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335.
                   
  IRONIES:                
  Daniel M. Koretz Jennifer L. Jennings "Unfortunately, it is often exceedingly difficult to obtain the permission and access needed to carry out testing-related research in the public education sector. This is particularly so if the research holds out the possibility of politically inconvenient findings, which virtually all evaluations in this area do. In our experience, very few state or district superintendents or commissioners consider it an obligation to provide thepublic or the field with open and impartial research. Data are considered proprietary—a position that the restrictions imposed by the federal Family Educational Rights and Privacy Act (FERPA) have made easier to maintain publicly. Access is usually provided only for research which is not seen as unduly threatening to the leaders’ immediate political agendas. The fact that this last consideration is often openly discussed underscores the lack of a culture of public accountability."   The Misunderstanding and Use of Data from Educational Tests, pp.4-5 Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010   http://www.spencer.org/data-use-and-educational-improvement-initiative-activities/ Externally administered high-stakes testing is widely reviled among US educationists. It strains credulity that Koretz can not find one district out of the many thousands to cooperate with him to discredit testing.
  Daniel M. Koretz Jennifer L. Jennings "This unwillingness to countenance honest but potentially threatening research garners very little discussion, but in this respect, education is an anomaly. In many areas of public policy, such as drug safety or vehicle safety, there is an expectation that the public is owed honest and impartial evaluation and research. For example, imagine what would have happed if the CEO of Merck had responded to reports of side-effects from Vioxx by saying that allowing access to data was “not our priority at present,” which is a not infrequent response to data requests made to districts or states. In public education, there is no expectation that the public has a right to honest evaluation, and data are seen as the policymakers’ proprietary sandbox, to which they can grant access when it happens to serve their political needs."   The Misunderstanding and Use of Data from Educational Tests, p.5 Prepared for Spencer Foundation meetings, Chicago, IL, February 11, 2010. Revised November 21, 2010   http://www.spencer.org/data-use-and-educational-improvement-initiative-activities/  
                   
      Cite selves or colleagues in the group, but dismiss or denigrate all other work.
      Falsely claim that research has only recently been done on a topic.
      Author cites (and accepts as fact without checking) someone else's dismissive review.