# Author Co-author(s) Quote Quote type Title Source Link
1 Joan Herman 4 others "While the challenge of teachers’ content-pedagogical knowledge has been documented (Heritage et al., 2009; Heritage, Jones & White, 2010; Herman et al., 2010), few studies have examined the relationship between such knowledge and teachers’ assessment practices, nor examined how teachers’ knowledge may moderate the relationship between assessment practices and student learning." Dismissive Relationships between Teacher Knowledge, Assessment Practice, and Learning-Chicken, Egg, or Omelet? CRESST Report 809, November 2011  
2 Lorrie A. Shepard Kristen L. Davidson, Richard Bowman "Although some instruments, such as the Northwest Evaluation Association's (NWEA) Measures of Academic Progress (MAP®), have been around for decades, few studies have been conducted to examine the technical adequacy of interim assessments or to evaluate their effects on teaching and student learning." Dismissive How Middle-School Mathematics Teachers Use Interim and Benchmark Assessment Data, p.2 CRESST Report 807, October 2011
3 Marguerite Clarke   “The evidence base is stronger in some areas than in others. For example, there are many professional standards for assessment quality that can be applied to classroom assessments, examinations, and large-scale assessments (APA, AERA, and NCME, 1999), but less professional or empirical research on enabling contexts.” p. 20 Dismissive Framework for Building an Effective Student Assessment System  World Bank, READ/SABER Working Paper, Aug. 2011  http://files.eric.ed.gov/fulltext/ED553178.pdf
4 Marguerite Clarke   “Data for some of these indicator areas can be found in official documents, published reports (for example, Ferrer, 2006), research articles (for example, Braun and Kanjee, 2005), and online databases. For the most part, data have not been gathered in any comprehensive or systematic fashion. Those wishing to review this type of information for a particular assessment system will most likely need to collect the data themselves.” p. 21 Denigrating Framework for Building an Effective Student Assessment System  World Bank, READ/SABER Working Paper, Aug. 2011  http://files.eric.ed.gov/fulltext/ED553178.pdf
5 Marguerite Clarke   “This paper has extracted principles and guidelines from countries’ experiences and the current research base to outline a framework for developing a more effective student assessment system. The framework provides policy makers and others with a structure for discussion and consensus building around priorities and key inputs for their assessment system.” p. 27 1stness Framework for Building an Effective Student Assessment System  World Bank, READ/SABER Working Paper, Aug. 2011  http://files.eric.ed.gov/fulltext/ED553178.pdf
6 Laura S. Hamilton   "Despite the widespread enthusiasm for assessment-based reforms, many of the current and proposed uses of large-scale assessments are based on unverified assumptions about the extent to which they will actually lead to improved teaching and learning, and insufficient attention has been paid to the characteristics of assessment programs that are likely to promote desired outcomes." Denigrating Testing What Has Been Taught, p.47 American Educator, Winter 2010-2011  
7 Laura S. Hamilton   "Can assessments meaningfully be aligned to standards, … What would the key features of an assessment system designed to increase student learning and improve instruction be? While current assessment knowledge is not sufficient to fully answer these questions, in this article I offer an overview of what is known and several suggestions for improving our approach to assessment." Denigrating Testing What Has Been Taught, p.47 American Educator, Winter 2010-2011  
8 Laura S. Hamilton   "There is no research evidence to tell us definitively how to build an assessment system that will promote student learning and be resistant to the negative consequences that are common in high-stakes testing programs." Dismissive Testing What Has Been Taught, p.49 American Educator, Winter 2010-2011
9 Laura S. Hamilton   "Research on the effects of various assessment-design features is limited, so any effort that relies heavily on assessment as a tool for school improvement should be carried out with caution." Denigrating Testing What Has Been Taught, p.50 American Educator, Winter 2010-2011
10 Laura S. Hamilton Brian M. Stecher, Kun Yuan “A few studies have attempted to examine how the creation and publication of standards, per se, have affected practices.” p. 3 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
11 Laura S. Hamilton Brian M. Stecher, Kun Yuan “The research evidence does not provide definitive answers to these questions.” p. 6 Denigrating Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
12 Laura S. Hamilton Brian M. Stecher, Kun Yuan “He [Porter 1994] also noted that ‘virtually all of the arguments, both for and against standards, are based on beliefs and hypotheses rather than on direct empirical evidence’ (p. 427). Although a large and growing body of research has been conducted to examine the effects of SBR, the caution Porter expressed in 1994 about the lack of empirical evidence remains relevant today.” pp. 34-35 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
13 Laura S. Hamilton Brian M. Stecher, Kun Yuan “Arguably the most important test of quality is whether the standards promote high-quality instruction and improved student learning, but as we discuss later, there is very little research to address that question.” p. 37 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
14 Laura S. Hamilton Brian M. Stecher, Kun Yuan “[T]here have been a few studies of SBR as a comprehensive system. . . . [T]here is some research on how the adoption of standards, per se, or the alignment of standards with curriculum influences school practices or student outcomes.” p. 38 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
15 Laura S. Hamilton Brian M. Stecher, Kun Yuan “The lack of evidence about the effects of SBR derives primarily from the fact that the vision has never been fully realized in practice.” p. 47 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
16 Laura S. Hamilton Brian M. Stecher, Kun Yuan “[A]lthough many conceptions of SBR emphasize autonomy, we currently know relatively little about the effects of granting autonomy or what the right balance is between autonomy and prescriptiveness.” p. 55 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
17 Laura S. Hamilton Brian M. Stecher, Kun Yuan “One of the primary responsibilities of the federal government should be to ensure ongoing collection of evidence demonstrating the effects of the policies, which could be used to make decisions about whether to continue on the current course or whether small adjustments or a major overhaul are needed.” p. 55 Dismissive Standards-Based Reform in the United States: History, Research, and Future Directions Center on Education Policy, December, 2008 http://www.rand.org/content/dam/rand/pubs/reprints/2009/RAND_RP1384.pdf
18 Joan Herman   "What of the impact of accountability on other segments of the student population--traditionally higher performing students? ...The average student? ...there is no obvious accountability mechanism for the "average student. There is little research on this issue." Dismissive Accountability and assessment: Is public interest in K-12 education being served? CRESST Report 728, October 2007  
19 Robert L. Linn   "Despite the clear appeal of assessment-based accountability and the widespread use of this approach, the development of assessments that are aligned with content standards and for which there is solid evidence of validity and reliability is a challenging endeavor." Dismissive Issues in the Design of Accountability Systems CRESST Report 650, April 2005  
20 Robert L. Linn   "Alignment of an assessment with the content standards that it is intended to measure is critical if the assessment is to buttress rather than undermine the standards. Too little attention has been given to the evaluation of the alignment of assessments and standards." Denigrating Issues in the Design of Accountability Systems CRESST Report 650, April 2005  
21 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “The shortcomings of the studies make it difficult to determine the size of teacher effects, but we suspect that the magnitude of some of the effects reported in this literature are overstated.” p. xiii Denigrating Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
22 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “Using VAM to estimate individual teacher effects is a recent endeavor, and many of the possible sources of error have not been thoroughly evaluated in the literature.” p. xix Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
23 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “Empirical evaluations do not exist for many of the potential sources of error we have identified. Studies need to be conducted to determine how these factors contribute to estimated teacher effects and to determine the conditions that exacerbate or mitigate the impact these factors have on teacher effects.” p. xix Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
24 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “This lack of attention to teachers in policy discussions may be attributed in part to another body of literature that attempted to determine the effects of specific teacher background characteristics, including credentialing status (e.g., Miller, McKenna, and McKenna, 1998; Goldhaber and Brewer, 2000) and subject matter coursework (e.g., Monk, 1994).” p. 8 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
25 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “To date, there has been little empirical exploration of the size of school effects and the sensitivity of teacher effects to modeling of school effects.” p. 78 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
26 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “There are no empirical explorations of the robustness of estimates to assumptions about prior-year schooling effects.“ p. 81 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
27 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “There is currently no empirical evidence about the sensitivity of gain scores or teacher effects to such alternatives.” p. 89 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
28 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “Empirical evaluations do not exist for many of the potential sources of error we have identified. Studies need to be conducted to determine how these factors contribute to estimated teacher effects and to determine the conditions that exacerbate or mitigate the impact these factors have on teacher effects.” p. 116 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
29 Laura S. Hamilton Daniel F. McCaffrey, Lockwood, Daniel M. Koretz “Although we expect missing data are likely to be pervasive, there is little systematic discussion of the extent or nature of missing data in test score databases.” p. 117 Dismissive Evaluating Value-Added Models for Teacher Accountability  Rand Corporation, 2003 https://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf
30 Marguerite Clarke 5 co-authors “What this study adds to the body of literature in this area is a systematic look at how impact varies with the stakes attached to the test results.” p. 91 1stness Perceived Effects of State-Mandated Testing Programs on Teaching and Learning etc. (5 co-authors) National Board on Educational Testing and Public Policy monograph, January 2003 http://files.eric.ed.gov/fulltext/ED474867.pdf
31 Marguerite Clarke 5 co-authors “Many calls for school reform assert that high-stakes testing will foster the economic competitiveness of the U.S. However, the empirical basis for this claim is weak.” p. 96, n. 1 Denigrating Perceived Effects of State-Mandated Testing Programs on Teaching and Learning etc. (5 co-authors) National Board on Educational Testing and Public Policy monograph, January 2003 http://files.eric.ed.gov/fulltext/ED474867.pdf
32 Brian M. Stecher Laura S. Hamilton, Stephen P. Klein, Eds. "High-stakes testing may also affect parents (e.g., their attitudes toward education, their engagement with schools, and their direct participation in their child's learning) as well as policymakers (their beliefs about system performance, their judgements about program effectiveness, and their allocation of resources). However, these issues remain largely unexamined in the literature." Dismissive Consequences of large-scale, high-stakes testing on school and classroom practice Chapter 4 in Making sense of test-based accountability in education, 2002, p.79 https://www.rand.org/content/dam/rand/pubs/monograph_reports/2002/MR1554.pdf
33 Brian M. Stecher Laura S. Hamilton, Stephen P. Klein, Eds. "As described in chapter 2, there was little concern about the effects of testing on teaching prior to the 1970s." Dismissive Consequences of large-scale, high-stakes testing on school and classroom practice Chapter 4 in Making sense of test-based accountability in education, 2002, p.81 https://www.rand.org/content/dam/rand/pubs/monograph_reports/2002/MR1554.pdf
34 Brian M. Stecher Laura S. Hamilton, Stephen P. Klein, Eds. "In light of the changes that occurred in the uses of large-scale testing in the 1980s and 1990s, researchers began to investigate teachers' reactions to external assessment. The initial research on the impact of large-scale testing was conducted in the 1980s and the 1990s." Dismissive Consequences of large-scale, high-stakes testing on school and classroom practice Chapter 4 in Making sense of test-based accountability in education, 2002, p.83 https://www.rand.org/content/dam/rand/pubs/monograph_reports/2002/MR1554.pdf
35 Brian M. Stecher Laura S. Hamilton, Stephen P. Klein, Eds. "Researchers have not documented the desirable consequences of testing … as clearly as the undesirable ones. More importantly, researchers have not generally measured the extent or magnitude of the shifts in practice that they identified as a result of high-stakes testing." Dismissive Consequences of large-scale, high-stakes testing on school and classroom practice Chapter 4 in Making sense of test-based accountability in education, 2002, pp.99–100 https://www.rand.org/content/dam/rand/pubs/monograph_reports/2002/MR1554.pdf
36 Daniel M. Koretz Daniel F. McCaffrey, Laura S. Hamilton "Few efforts are made to evaluate directly score gains obtained under high-stakes conditions, and conventional validation tools are not fully adequate for the task.", p. 1 Dismissive Toward a framework for validating gains under high-stakes conditions CSE Technical Report 551, CRESST/Harvard Graduate School of Education, CRESST/RAND Education, December 2001  
37 Marguerite Clarke George F. Madaus “[T]here has been no analogous infrastructure for independently evaluating a testing program before or after implementation, or for monitoring test use and impact.” p. 19 Dismissive The Adverse Impact of High Stakes Testing on Minority Students: Evidence from 100 Years of Test Data In G. Orfield and M. Kornhaber (Eds.), Raising standards or raising barriers? Inequality and high stakes testing in public education. New York: The Century Foundation (2001) http://files.eric.ed.gov/fulltext/ED450183.pdf
38 Marguerite Clarke George F. Madaus “The effects of testing are now so diverse, widespread, and serious that it is necessary to establish mechanisms for catalyzing inquiry about, and systematic independent scrutiny of them.” p. 20 Dismissive The Adverse Impact of High Stakes Testing on Minority Students: Evidence from 100 Years of Test Data In G. Orfield and M. Kornhaber (Eds.), Raising standards or raising barriers? Inequality and high stakes testing in public education. New York: The Century Foundation (2001) http://files.eric.ed.gov/fulltext/ED450183.pdf
39 Ronald Dietel   "In the late 1980s, CRESST was among the first to research the measurement of rigorous, discipline-based knowledge for purposes of large-scale assessment." 1stness Center for Research on Evaluation, Standards, and Student Testing (CRESST): clarifying the goals and activities of CRESST EducationNews.org, November 18, 2000
40 Marguerite Clarke Madaus, Horn, and Ramos “[F]or most of this century, there has been no infrastructure for independently evaluating a testing programme before or after implementation, or for monitoring test use and impact. The commercial testing industry does not as yet have any structure in place for the regulation and monitoring of appropriate test use.” p. 177 Dismissive Retrospective on Educational Testing and Assessment in the 20th Century Curriculum Studies, 2000, vol. 32, no. 2, http://webpages.uncc.edu/~rglamber/Rsch6109%20Materials/HistoryAchTests_3958652.pdf
41 Marguerite Clarke Madaus, Horn, and Ramos “Given the paucity of evidence available on the volume of testing over time, we examined five indirect indicators of growth in testing. . . .” p. 169 Dismissive Retrospective on Educational Testing and Assessment in the 20th Century Curriculum Studies, 2000, vol. 32, no. 2 http://webpages.uncc.edu/~rglamber/Rsch6109%20Materials/HistoryAchTests_3958652.pdf
42 Lorrie A. Shepard   "This portrayal derives mostly from research leading to Wood and Bruner’s original conception of scaffolding, from Vygotskian theory, and from naturalistic studies of effective tutoring described next. Relatively few studies have been undertaken in which explicit feedback interventions have been tried in the context of constructivist instructional settings." Dismissive The Role of Classroom Assessment in Teaching and Learning, p.59 CSE Technical Report 517, February 2000  
43 Lorrie A. Shepard   "The NCTM and NRC visions are idealizations based on beliefs about constructivist pedagogy and reflective practice. Although both are supported by examples of individual teachers who use assessment to improve their teaching, little is known about what kinds of support would be required to help large numbers of teachers develop these strategies or to ensure that teacher education programs prepared teachers to use assessment in these ways. Research is needed to address these basic implementation questions." Dismissive The Role of Classroom Assessment in Teaching and Learning, p.64 CSE Technical Report 517, February 2000  
44 Lorrie A. Shepard   "This social-constructivist view of classroom assessment is an idealization. The new ideas and perspectives underlying it have a basis in theory and empirical studies, but how they will work in practice and on a larger scale is not known." Dismissive The Role of Classroom Assessment in Teaching and Learning, p.67 CSE Technical Report 517, February 2000  
45 Marguerite Clarke Madaus, Pedulla, and Shore “The National Board believes that we must as a nation conduct research that helps testing contribute to student learning, classroom practice, and state and district management of school resources.” p. 2 Dismissive An Agenda for Research on Educational Testing NBETPP Statements, Vol. 1, No. 1, Jan. 2000 http://files.eric.ed.gov/fulltext/ED456137.pdf
46 Marguerite Clarke Madaus, Pedulla, and Shore “Validity research on teacher testing needs to address the following four issues in particular. . .” : [four bullet-point paragraphs follow] p. 3 Dismissive An Agenda for Research on Educational Testing NBETPP Statements, Vol. 1, No. 1, Jan. 2000 http://files.eric.ed.gov/fulltext/ED456137.pdf
47 Marguerite Clarke Madaus, Pedulla, and Shore “[W]e need to understand better the relationship between testing and the diversity of the college student body.” p. 6 Dismissive An Agenda for Research on Educational Testing NBETPP Statements, Vol. 1, No. 1, Jan. 2000 http://files.eric.ed.gov/fulltext/ED456137.pdf
48 Marguerite Clarke Haney, Madaus “We trust that further research will build on this good example and help all of us move from suggestive correlational studies towards more definitive conclusions.” p. 9 1stness High Stakes Testing and High School Completion NBETPP Statements, Volume 1, Number 3, Jan. 2000 http://files.eric.ed.gov/fulltext/ED456139.pdf
49 Eva L. Baker Zenaida Aguirre-Munoz "The extent and nature of the impact of language skills on performance assessments remains elusive due to the paucity of research in this area." Dismissive Improving the equity and validity of assessment-based information systems, p.3 CSE Technical Report 462, December 1997  
50 Mary Lee Smith 11 others "The purpose of the research described in this report is to understand what happens in the aftermath of a change in state assessment policy that is designed to improve schools and make them more accountable to a set of common standards. Although theoretical and rhetorical works about this issue are common in the literature, empirical evidence is novel and scant." Dismissive Reforming schools by reforming assessment: Consequences of the Arizona Student Assessment Program (ASAP): Equity and teacher capacity building, p.3 CSE Technical Report 425, March 1997  
51 Eva L. Baker Robert L. Linn, Joan L. Herman "How do we assure accurate placement of students with varying abilities and language capabilities? There is little research to date to guide policy and practice (August, et al., 1994)." Dismissive CRESST: A Continuing Mission to Improve Educational Assessment, p.12 Evaluation Comment, Summer 1996  
52 Eva L. Baker Robert L. Linn, Joan L. Herman "Alternative assessments are needed for these students (see Kentucky Portfolios for Special Education, Kentucky Department of Education, 1995). Although promising, there has been little or no research investigating the validity of inferences from these adaptations or alternatives." Dismissive CRESST: A Continuing Mission to Improve Educational Assessment, p.13 Evaluation Comment, Summer 1996  
53 Mary Lee Smith 5 others "This study also draws on previous research on the role of mandated testing. …The question unanswered by extant research is whether assessments that differ in form from the traditional, norm- or criterion-referenced standardized tests would produce similar reactions and effects." Dismissive What Happens When the Test Mandate Changes? Results of a Multiple Case Study CSE Technical Report 380, July 1994  
54 Laura S. Hamilton   “Despite the number of studies investigating affective aspects of test taking, little is known about how students perceive the kinds of extended performance assessments currently being developed for state and local testing programs.” - Abstract Denigrating An Investigation of Students' Affective Responses to Alternative Assessment Formats Paper presented at the Annual Meeting of the National Council on Measurement in Education (New Orleans, LA, April 5-7, 1994) http://files.eric.ed.gov/fulltext/ED376203.pdf
55 Laura S. Hamilton   “As stated earlier, this study was not intended to produce results that could be generalized to other tasks or to other samples of students, but to identify questions that might be addressed by future studies and to suggest possible hypotheses.” p. 23 Dismissive An Investigation of Students' Affective Responses to Alternative Assessment Formats Paper presented at the Annual Meeting of the National Council on Measurement in Education (New Orleans, LA, April 5-7, 1994) http://files.eric.ed.gov/fulltext/ED376203.pdf
56 Robert L. Linn Vonda L. Kiplinger "Although much has been written on achievement motivation per se, there has been surprisingly little empirical research on the effects of different motivation conditions on test performance. Before examining the paucity of research on the relationship of motivation and test performance…" Dismissive Raising the stakes of test administration: The impact on student performance on NAEP, p.3 CSE Technical Report 360, March 3, 1993
57 Lorrie A. Shepard   "Research evidence on the effects of traditional standardized tests when used as high-stakes accountability instruments is strikingly negative." Denigrating Will National Tests Improve Student Learning?, pp.15-16 CSE Technical Report 342  
58 Daniel M. Koretz Robert L. Linn, Stephen Dunbar, Lorrie A. Shepard “Evidence relevant to this debate has been limited.” p. 2 Dismissive The Effects of High-Stakes Testing On Achievement: Preliminary Findings About Generalization Across Tests  Originally presented at the annual meeting of the AERA and the NCME, Chicago, April 5, 1991 http://nepc.colorado.edu/files/HighStakesTesting.pdf
59 Jennie P. Yeh Joan L. Herman "Testing in American schools is increasing in both scope and visibility. … What return are we getting for this quite considerable investment? Little information is available. How are tests used in schools? What functions do tests serve in classrooms?", p.1 Dismissive Teachers and testing: A survey of test use CSE Report No. 166, 1981
60 Joan L. Herman James Burry, Don Dorr-Bremme, Charlotte M. Lazar-Morrison, James D. Lehman, Jennie P. Yeh "Despite the great controversy that surrounds testing and its potential uses and abuses, there is little empirical information available about the nature of testing as it actually occurs and is used (or not used) in schools. The Test Use Project at the Center for the Study of Evaluation seeks to fill this gap and answer basic questions about tests and schooling.", p.2 Dismissive Teaching and testing: Allies or adversaries CSE Report No. 165, 1981
61 Charlotte Lazar-Morrison Linda Polin, Raymond Moy, James Burry "There is little research-based information about current testing practice." Dismissive A review of the literature on test use, p.3 CSE Report No. 144, August 1980  
62 Charlotte Lazar-Morrison Linda Polin, Raymond Moy, James Burry "Today, there still remains a plethora of publications on these very issues and a dearth of empirical support on actual test use practices." Dismissive A review of the literature on test use, p.3 CSE Report No. 144, August 1980  
63 Charlotte Lazar-Morrison Linda Polin, Raymond Moy, James Burry "Although much has been written about minimum competency issues, there has yet to be any report of the actual uses or extent of the use of competency-based tests." Dismissive A review of the literature on test use, p.7 CSE Report No. 144, August 1980  
64 Charlotte Lazar-Morrison Linda Polin, Raymond Moy, James Burry "The literature on curriculum-embedded tests is equally scant." Dismissive A review of the literature on test use, p.8 CSE Report No. 144, August 1980  
             
IRONIES:            
Eva L. Baker Robert L. Linn, Joan L. Herman "Diverse perspectives are needed to clarify real differences and to find equitable, workable balances."   CRESST: A Continuing Mission to Improve Educational Assessment, p.13 Evaluation Comment, Summer 1996  
Eva L. Baker Robert L. Linn, Joan L. Herman "Impartiality, not advocacy, is the key to the credibility of research and development."   CRESST: A Continuing Mission to Improve Educational Assessment, p.13 Evaluation Comment, Summer 1996  
Citing themselves or colleagues in the group while dismissing or denigrating all other work.
Falsely claiming that research has only recently been done on a topic.