| HOME: Dismissive Reviews in Education Policy Research |
| # | Author | Co-author(s) | Dismissive Quote | Type | Title | Source | Link1 | Funders | Notes | Notes2 |
| 1 | Lawrence O. Picus, Frank Adamson, William Montague, and Margaret Owens | Linda Darling-Hammond, Frank Adamson, Eds. | "In 1993, a GAO study … Unfortunately, aggregating these different types of time disguises important differences between them that have emerged in the NCLB era as more important considerations than in previous decades." pp. 251-252 | Dismissive, Denigrating | "A New Conceptual Framework for Cost Analysis" | Chapter 8 in Beyond the Bubble Test: How Performance Assessments Support 21st Century Learning, 2014 | "Research for this volume was supported by the Ford Foundation, the Hewlett Foundation, the Nellie Mae Educational Foundation, and the Sandler Foundation to whom we are grateful" p. viii | This is nonsense. Educators are paid the same no matter how they spend their time. Once again, Picus insinuates something must be wrong with the GAO study, suggesting that he is justified in ignoring its findings and presenting those from his much inferior studies alone. Also, see below. |
| 2 | Lawrence O. Picus, Frank Adamson, William Montague, and Margaret Owens | Linda Darling-Hammond, Frank Adamson, Eds. | "In their 2002 work on test-based accountability, Hamilton, Stecher, and Klein point out that while improved testing systems are likely to cost more money, few good estimates of the costs of improved accountability systems in relation to their benefits have been developed. More than a decade later, little has changed." p. 251 | Dismissive | "A New Conceptual Framework for Cost Analysis" | Chapter 8 in Beyond the Bubble Test: How Performance Assessments Support 21st Century Learning, 2014 | "Research for this volume was supported by the Ford Foundation, the Hewlett Foundation, the Nellie Mae Educational Foundation, and the Sandler Foundation to whom we are grateful" p. viii | The 1993 GAO study, which I updated in 2000, collected data from the universe of states with testing programs and a very large, representative sample (> 660) of public school districts. We collected all the data on all the systemwide testing occurring at the time. We oversampled districts in certain states, such as Maryland, the one state at the time with the most elaborate performance test types. In doing that, we did more than he ever did in his couple of state studies. Yet, as usual, he implies that the GAO study or my work must have left out something important. That GAO study was vastly superior to anything that Larry Picus has done on the topic. Yet, even twenty years later, he continues to falsely insinuate faults and, by implication, claim his paltry studies as superior. Among other petty behaviors of his over a twenty-year span, he refuses to include any citations or references with my name included. That way, he gives his readers no clue or link to sources that will contradict what he writes about the GAO study, my defense of it, and my subsequent work on the topic. |
| 3 | Lawrence O. Picus, Frank Adamson, William Montague, and Margaret Owens | Linda Darling-Hammond, Frank Adamson, Eds. | "Despite the growing importance of assessments in our education system, relatively little is known about the economic costs and benefits of these assessments that are such a large part of every student's educational experience." p. 239 | Dismissive | "A New Conceptual Framework for Cost Analysis" | Chapter 8 in Beyond the Bubble Test: How Performance Assessments Support 21st Century Learning, 2014 | "Research for this volume was supported by the Ford Foundation, the Hewlett Foundation, the Nellie Mae Educational Foundation, and the Sandler Foundation to whom we are grateful" p. viii | The 1993 GAO study, which I updated in 2000, collected data from the universe of states with testing programs and a very large, representative sample (> 660) of public school districts. We collected all the data on all the systemwide testing occurring at the time. We oversampled districts in certain states, such as Maryland, the one state at the time with the most elaborate performance test types. In doing that, we did more than he ever did in his couple of state studies. Yet, as usual, he implies that the GAO study or my work must have left out something important. That GAO study was vastly superior to anything that Larry Picus has done on the topic. Yet, even twenty years later, he continues to falsely insinuate faults and, by implication, claim his paltry studies as superior. Among other petty behaviors of his over a twenty-year span, he refuses to include any citations or references with my name included. That way, he gives his readers no clue or link to sources that will contradict what he writes about the GAO study, my defense of it, and my subsequent work on the topic. |
| 4 | Michael Hout, Stuart W. Elliot, Editors | Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. | "Unfortunately, there were no other studies available that would have allowed us to contrast the overall effect of state incentive programs predating NCLB…" p. 4-6 | Dismissive | Incentives and Test-Based Accountability in Education, 2011 | Board on Testing and Assessment, National Research Council | https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education | National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" | Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper (1993); Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran (1989); *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925). *Covers many studies; study is a research review, research synthesis, or meta-analysis. Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. | "Others have considered the role of tests in incentive programs. These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor. Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones." | What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997) |
| 5 | Michael Hout, Stuart W. Elliot, Editors | Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. | "Test-based incentive programs, as designed and implemented in the programs that have been carefully studied have not increased student achievement enough to bring the United States close to the levels of the highest achieving countries." p. 4-26 | Denigrating | Incentives and Test-Based Accountability in Education, 2011 | Board on Testing and Assessment, National Research Council | https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education | National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" | Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper (1993); Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran (1989); *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925). *Covers many studies; study is a research review, research synthesis, or meta-analysis. Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. | "Others have considered the role of tests in incentive programs. These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor. Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones." | What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997) |
| 6 | Michael Hout, Stuart W. Elliot, Editors | Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. | "Despite using them for several decades, policymakers and educators do not yet know how to use test-based incentives to consistently generate positive effects on achievement and to improve education." p. 5-1 | Dismissive | Incentives and Test-Based Accountability in Education, 2011 | Board on Testing and Assessment, National Research Council | https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education | National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" | Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper (1993); Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran (1989); *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925). *Covers many studies; study is a research review, research synthesis, or meta-analysis. Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. | "Others have considered the role of tests in incentive programs. These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor. Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones." | What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997) |
| 7 | Michael Hout, Stuart W. Elliot, Editors | Paul Hill, Thomas J. Kane, Daniel M. Koretz, Susanna Loeb, Lorrie A. Shepard, Brian Stecher, et al. | "The general lack of guidance coming from existing studies of test-based incentive programs in education…" | Dismissive | Incentives and Test-Based Accountability in Education, 2011 | Board on Testing and Assessment, National Research Council | https://www.nap.edu/catalog/12521/incentives-and-test-based-accountability-in-education | National Research Council funders; "The study was sponsored by Carnegie Corporation of New York and the William and Flora Hewlett Foundation" | Relevant studies of the effects of varying types of incentive or the optimal structure of incentives include those of Kelley (1999); the *Southern Regional Education Board (1998); Trelfa (1998); Heneman (1998); Banta, Lund, Black & Oblander (1996); Brooks-Cooper (1993); Eckstein & Noah (1993); Richards & Shen (1992); Jacobson (1992); Heyneman & Ransom (1992); *Levine & Lezotte (1990); Duran (1989); *Crooks (1988); *Kulik & Kulik (1987); Corcoran & Wilson (1986); *Guskey & Gates (1986); Brook & Oxenham (1985); Oxenham (1984); Venezky & Winfield (1979); Brookover & Lezotte (1979); McMillan (1977); Abbott (1977); *Staats (1973); *Kazdin & Bootzin (1972); *O’Leary & Drabman (1971); Cronbach (1960); and Hurlock (1925). *Covers many studies; study is a research review, research synthesis, or meta-analysis. Other researchers who, prior to 2000, studied test-based incentive programs include Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, Roueche, Kirk, Wheeler, Boylan, and Wilson. | "Others have considered the role of tests in incentive programs. These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor. Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones." | What about: Brooks-Cooper, C. (1993), Brown, S. M. & Walberg, H. J. (1993), Heneman, H. G., III. (1998), Hurlock, E. B. (1925), Jones, J. et al. (1996), Kazdin, A. & Bootzin, R. (1972), Kelley, C. (1999), Kirkpatrick, J. E. (1934), O’Leary, K. D. & Drabman, R. (1971), Palmer, J. S. (2002), Richards, C. E. & Shen, T. M. (1992), Rosswork, S. G. (1977), Staats, A. (1973), Tuckman, B. W. (1994), Tuckman, B. W. & Trimble, S. (1997), Webster, W. J., Mendro, R. L., Orsack, T., Weerasinghe, D. & Bembry, K. (1997) |
| 8 | Diana Pullin (Chair) | Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson | "However, there have been very few studies of how interim assessments are actually used by individual teachers in classrooms, by principals, and by districts or of their impact on student achievement." p. 6 | Dismissive | Best Practices for State Assessment Systems, Part I | Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council | https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of | "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." | See, for example: https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm |
| 9 | Diana Pullin (Chair) | Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson | "Research indicates that the result has been emphasis on lower-level knowledge and skills and very thin alignment with the standards. For example, Porter, Polikoff, and Smithson (2009) found very low to moderate alignment between state assessments and standards—meaning that large proportions of content standards are not covered on the assessments (see also Fuller et al., 2006; Ho, 2008)." p. 10 | Denigrating | Best Practices for State Assessment Systems, Part I | Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council | https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of | "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." | Pretty difficult to believe given how standards-based test items are developed -- directly from the standards. |
| 10 | Diana Pullin (Chair) | Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson | "Another issue is that the implications of computer-based approaches for validity and reliability have not been thoroughly evaluated." p. 40 | Dismissive | Best Practices for State Assessment Systems, Part I | Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council | https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of | "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." |
| 11 | Diana Pullin (Chair) | Joan Herman, Scott Marion, Dirk Mattson, Rebecca Maynard, Mark Wilson | "For current tests, he [Lauress Wise] observed, there is little evidence that they are good indicators of instructional effectiveness or good predictors of students’ readiness for subsequent levels of instruction." | Dismissive | Best Practices for State Assessment Systems, Part I | Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; Center for Education; Division of Behavioral and Social Sciences and Education; National Research Council | https://www.nap.edu/catalog/12906/best-practices-for-state-assessment-systems-part-i-summary-of | "With funding from the James B. Hunt, Jr. Institute for Educational Leadership and Policy, as well as additional support from the Bill & Melinda Gates Foundation and the Stupski Foundation, the National Research Council (NRC) planned two workshops designed to explore some of the possibilities for state assessment systems." |
| 12 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "For example, early advocates of standards-based reforms were reacting against previous efforts focused on minimum competencies (such as balancing a checkbook) that had done little to improve the quality of instruction or student learning." p. 1 | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf |
| 13 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "But, the report cautioned that extrapolating from small-scale, intensive studies to full-system reform was an unprecedented task—one that would require significant investments in teacher professional development and ongoing evaluations to improve the system." p. 2 | 1stness | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | Scale-up unprecedented? Yet it was done in earlier reforms in the United States, and in the other countries the authors claim have done things right. |
| 14 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "… the development process for standards has often left out more complex, discipline-based expertise about how knowledge, skills, and conceptual understanding can be developed together in a mutually reinforcing way." p. 3 | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | Translation: some jurisdictions have the temerity to develop standards without hiring any of them. |
| 15 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "It is important to recognize that broad content standards, at least as developed thus far in the United States, do not have the specificity of curricula as typically developed in other countries, where there is greater clarity about the depth of coverage and the appropriate sequencing of topics." p. 3 | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | Here they complain about a lack of specificity in US standards. In the next paragraph they complain about too much specificity and a lack of depth -- "mile wide and inch deep." |
| 16 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "Although NCLB alignment requirements were intended to correct the “mile wide and inch deep” curriculum problems identified by TIMSS researchers, the most recent research indicates that these problems are largely unabated." p. 3 | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | Here they complain about too much specificity and a lack of depth -- "mile wide and inch deep." In the previous paragraph they complain about a lack of specificity in US standards. |
| 17 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "In the most comprehensive study completed since NCLB, test items were compared to state content standards in each of nine states in mathematics and English, language arts, and reading, and in seven states for science." p. 6 | 1stness | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf |
| 18 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "Research on effective schools, for example, documents that schools with a sense of common purpose and emphasis on academics can produce student achievement well above demographic predictions. But, this research often relied on case studies of exceptional schools." p. 6 | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | But, often it focused on ordinary schools. The research base on effective schools is gargantuan in number and size. |
| 19 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "Although state standards have been in use since the late 1980s, and scholarly work on progressions has made significant strides in recent years, there has been little attention in the United States to incorporating the most up-to-date thinking about cognition and learning progressions into curriculum materials and assessments." pp. 7-8 | Dismissive, Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf |
| 20 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "There is at present no direct way to measure changes in instruction that would withstand the requirements of high-stakes use. Although research on classroom observational instruments is limited, a great deal is known from cognitive science research." p. 13 | Dismissive | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | There exists an enormous number of such observational studies. See, for example: https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm |
| 21 | L. Shepard, J. Hannaway, E. Baker (Eds.) | Patricia Gandara, Drew Gitomer, Margaret Goertz, Helen Ladd, Robert Linn, P. David Pearson, Diane Ravitch, William Schmidt, Alan Schoenfeld, David Stern, William Trent, Mark Wilson | "Ten states currently use end-of-course exams for accountability purposes, and other states have plans to implement them. These exams differ significantly, however, from the course assessment systems in most high achieving nations. In contrast to most end-of-course tests in the United States, high school assessments in Australia, Finland, Hong Kong, the Netherlands, Singapore, Sweden, and the United Kingdom—among others—are generally developed by high school and college faculty." p. | Denigrating | Education Policy White Paper: Standards, Assessments, and Accountability, 2009 | National Academy of Education | https://files.eric.ed.gov/fulltext/ED531138.pdf | That's exactly how they are developed in the US, too, with groups of high school and college faculty. |
| 22 | Douglas N. Harris | Lori L. Taylor, Amy A. Levine, William K. Ingle, Leslie McDonald | "However, previous studies understate current costs by focusing on costs before NCLB was put in place and by excluding important cost categories." | Denigrating | The Resource Costs of Standards, Assessments, and Accountability | Report to the National Research Council | National Research Council funders | No, they did not leave out important cost categories; Harris' study deliberately exaggerates costs. See pages 3-10: https://nonpartisaneducation.org/Review/Essays/v10n1.pdf |
| 23 | Jay P. Heubert | "For Heubert, it is very much an open question what the effect of standards and high-stakes testing will be." p. 83 | Dismissive | Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." | See, for example, https://www.tandfonline.com/doi/full/10.1080/15305058.2011.602920 ; https://nonpartisaneducation.org/Review/Resources/SurveyList.htm ; https://nonpartisaneducation.org/Review/Resources/QualitativeList.htm ; https://nonpartisaneducation.org/Review/Resources/QuantitativeList.htm | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). *Covers many studies; study is a research review, research synthesis, or meta-analysis. |
| 24 | Jay P. Heubert | "There is little evidence to suggest that exit exams in current use have been validated properly against the defined curriculum and actual instruction; rather, it appears that many states have not taken adequate steps to validate their assessment instruments, and the proper studies would reveal important weaknesses." pp.83-84 | Dismissive | Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." | Relevant studies of the effects of tests and/or accountability programs on motivation and instructional practice include those of the *Southern Regional Education Board (1998); Johnson (1998); Schafer, Hultgren, Hawley, Abrams Seubert & Mazzoni (1997); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); Tuckman & Trimble (1997); Clarke & Stephens (1996); Zigarelli (1996); Stevenson, Lee, et al. (1995); Waters, Burger & Burger (1995); Egeland (1995); Prais (1995); Tuckman (1994); Ritchie & Thorkildsen (1994); Brown & Walberg (1993); Wall & Alderson (1993); Wolf & Rapiau (1993); Eckstein & Noah (1993); Chao-Qun & Hui (1993); Plazak & Mazur (1992); Steedman (1992); Singh, Marimutha & Mukjerjee (1990); *Levine & Lezotte (1990); O’Sullivan (1989); Somerset (1988); Pennycuick & Murphy (1988); Stevens (1984); Marsh (1984); Brunton (1982); Solberg (1977); Foss (1977); *Kirkland (1971); Somerset (1968); Stuit (1947); and Keys (1934). *Covers many studies; study is a research review, research synthesis, or meta-analysis. | "Others have considered the role of tests in incentive programs. These researchers have included Homme, Csanyi, Gonzales, Rechs, O’Leary, Drabman, Kaszdin, Bootzin, Staats, Cameron, Pierce, McMillan, Corcoran, and Wilson. International organizations, such as the World Bank or the Asian Development Bank, have studied the effects of testing on education programs they sponsor. Researchers have included Somerset, Heynemann, Ransom, Psacharopoulis, Velez, Brooke, Oxenham, Bude, Chapman, Snyder, and Pronaratna. Moreover, the mastery learning/mastery testing experiments conducted from the 1960s through today varied incentives, frequency of tests, types of tests, and many other factors to determine the optimal structure of testing programs. Researchers included such notables as Bloom, Carroll, Keller, Block, Burns, Wentling, Anderson, Hymel, Kulik, Tierney, Cross, Okey, Guskey, Gates, and Jones." ||||
| 25 | Christopher Edley, Jr. | "To be sure, there is a largely unexamined empirical assertion underlying the arguments of high-stakes proponents: attaching high-stakes consequences for the students provides an indispensable, otherwise unobtainable incentive for students, parents, and teachers to pay careful attention to learning tasks. For the countless parents, policy makers, and observers who approach these debates as instrumentalists, the accuracy of this assertion is a central mystery as we struggle to close the education gap." p.128 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." | Relevant studies of the effects of tests and/or accountability programs on motivation and instructional practice include those of the *Southern Regional Education Board (1998); Johnson (1998); Schafer, Hultgren, Hawley, Abrams Seubert & Mazzoni (1997); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); Tuckman & Trimble (1997); Clarke & Stephens (1996); Zigarelli (1996); Stevenson, Lee, et al. (1995); Waters, Burger & Burger (1995); Egeland (1995); Prais (1995); Tuckman (1994); Ritchie & Thorkildsen (1994); Brown & Walberg (1993); Wall & Alderson (1993); Wolf & Rapiau (1993); Eckstein & Noah (1993); Chao-Qun & Hui (1993); Plazak & Mazur (1992); Steedman (1992); Singh, Marimutha & Mukjerjee (1990); *Levine & Lezotte (1990); O’Sullivan (1989); Somerset (1988); Pennycuick & Murphy (1988); Stevens (1984); Marsh (1984); Brunton (1982); Solberg (1977); Foss (1977); *Kirkland (1971); Somerset (1968); Stuit (1947); and Keys (1934). *Covers many studies; study is a research review, research synthesis, or meta-analysis. | ||||
| 26 | Christopher Edley, Jr. | "There has been too little attention in policy and political debates to the rate of school improvement." p.130 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 27 | Christopher Edley, Jr. | "Yet, curiously, there is little public debate and little research about the rate of change we should require of school reform efforts in order to win the continuing support of voters and taxpayers." p.130 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 28 | Christopher Edley, Jr. | "Certainly much research remains to be done—conceptualized, even—in the continuing effort to give educators and parents the insights needed to promote learning." p.131 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 29 | Christopher Edley, Jr. | "For many serious policy analysts, the choice issue is uninteresting because there is so little good science to digest, the methodological challenges seem all but imponderable, and purists insist that there should be large-scale randomized experiments, which seem impossible on practical grounds. The few studies to date have fueled a firestorm of controversy out of proportion to the available evidence." p.133 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 30 | Christopher Edley, Jr. | "Looking to the future, this situation must not stand. … We must have research of sufficient quantity and quality to match the growing challenge that this represents in so many communities." p.136 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 31 | Christopher Edley, Jr. | "The policy and political question is how much weight to accord them in light of the science. The science is too thin." p.137 | Dismissive | "Education Reform in Context: Research, Politics, and Civil Rights" Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." |||||
| 32 | Michael A. Rebell | "Even though democratic theory in the United States in recent decades has extolled the concept of the informed citizen, there has, in fact, been little discussion, let alone analysis, of the specific skills individuals need to carry out the functions of such a citizen." p.244 | Dismissive | "Educational Adequacy, Democracy, and the Courts," Achieving High Standards for All | National Research Council | "This project was funded by grant R215U990023 from the Office of Educational Research and Improvement (OERI) of the United States Department of Education." | |||||
| 33 | Richard J. Shavelson & Lisa Towne, Eds. | "there are a number of areas in education practice and policy in which basic theoretical understanding is weak. For example, very little is known about how young children learn ratio and proportion." p.124 | Dismissive | Scientific Research in Education (2002) | Committee on Scientific Principles for Education Research, National Research Council | " … our sponsor, the U.S. Department of Education’s National Educational Research Policy and Priorities Board." |||||
| 34 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "Despite their importance and widespread use, little is known about the impact of these tests on states’ recent efforts to improve teaching and learning." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 35 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "Little information about the technical soundness of teacher licensure tests appears in the published literature." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 36 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "Little research exists on the extent to which licensure tests identify candidates with the knowledge and skills necessary to be minimally competent beginning teachers." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 37 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "Information is needed about the soundness and technical quality of the tests that states use to license their teachers." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.14 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 38 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "policy and practice on teacher licensure testing in the United States are nascent and evolving" | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 39 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "The paucity of data and these methodological challenges made the committee’s examination of teacher licensure testing difficult." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 40 | Karen J. Mitchell, David Z. Robinson, Barbara S. Plake, & Kaeli T. Knowles (Eds.) | Linda Darling-Hammond, Stephen P. Klein, Eva L. Baker, Lorraine McDonnell, Lauress L. Wise, Daniel M. Koretz, Loretta A. Shepard, | "There were a number of questions the committee wanted to answer but could not, either because they were beyond the scope of this study, the evidentiary base was inconclusive, or the committee’s time and resources were insufficient." | Dismissive | Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality, 2001, p.17 | Committee on Assessment and Teacher Quality | Board on Testing and Assessment, National Research Council | Every stage of test development, administration, and analysis at National Evaluation Systems—the contractors for dozens of states' teacher licensure tests—was thoroughly documented. But, instead of requesting that documentation from each state, which owned said documentation, the NRC committee insisted that NES provide it. NES refused to do so unless the NRC committee received permission from each state. The NRC committee, apparently, didn't feel like doing that much work, so declared the information nonexistent. | |||
| 41 | Sheila Barron | "Although this is a topic researchers ... talk about often, very little has been written about the difficulties secondary analysts confront." p.173 | Dismissive | Difficulties associated with secondary analysis of NAEP data, chapter 9 | Grading the Nation's Report Card, National Research Council, 2000 | https://www.nap.edu/catalog/9751/grading-the-nations-report-card-research-from-the-evaluation-of | National Research Council funders | In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy (2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000), Stancavage, et al. (2002), Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004). |||
| 42 | Sheila Barron | "...few articles have been written that specifically address the difficulties of using NAEP data." p.173 | Dismissive | Difficulties associated with secondary analysis of NAEP data, chapter 9 | Grading the Nation's Report Card, National Research Council, 2000 | https://www.nap.edu/catalog/9751/grading-the-nations-report-card-research-from-the-evaluation-of | National Research Council funders | In their 2009 Evaluation of NAEP for the US Education Department, Buckendahl, Davis, Plake, Sireci, Hambleton, Zenisky, & Wells (pp. 77–85) managed to find quite a lot of research on making comparisons between NAEP and state assessments: several of NAEP's own publications, Chromy (2005), Chromy, Ault, Black, & Mosquin (2007), McLaughlin (2000), Schuiz & Mitzel (2005), Sireci, Robin, Meara, Rogers, & Swaminathan (2000), Stancavage, et al. (2002), Stoneberg (2007), WestEd (2002), and Wise, Le, Hoffman, & Becker (2004). |||
| 43 | Jay P. Heubert, Robert M. Hauser, Eds. | "A growing body of research suggests that tests often do in fact change school and classroom practices (Corbett & Wilson, 1991; Madaus, 1988; Herman & Golan 1993; Smith & Rottenberg, 1991)." p.29 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Rubbish. Entire books dating back a century were written on the topic, for example: C.C. Ross, Measurement in Today’s Schools, 1942; G.M. Ruch, G.D. Stoddard, Tests and Measurements in High School Instruction, 1927; C.W. Odell, Educational Measurement in High School, 1930. Other testimonies to the abundance of educational testing and empirical research on test use starting in the first half of the twentieth century can be found in Lincoln & Workman 1936, 4, 7; Butts 1947, 605; Monroe 1950, 1461; Holman & Docter 1972, 34; Tyack 1974, 183; and Lohman 1997, 88. | |||
| 44 | Jay P. Heubert, Robert M. Hauser, Eds. | "A growing body of research suggests that tests often do in fact change school and classroom practices (Corbett & Wilson, 1991; Madaus, 1988; Herman & Golan 1993; Smith & Rottenberg, 1991)." p.29 | Denigrating | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Rubbish. Entire books dating back a century were written on the topic, for example: C.C. Ross, Measurement in Today’s Schools, 1942; G.M. Ruch, G.D. Stoddard, Tests and Measurements in High School Instruction, 1927; C.W. Odell, Educational Measurement in High School, 1930. Other testimonies to the abundance of educational testing and empirical research on test use starting in the first half of the twentieth century can be found in Lincoln & Workman 1936, 4, 7; Butts 1947, 605; Monroe 1950, 1461; Holman & Docter 1972, 34; Tyack 1974, 183; and Lohman 1997, 88. | |||
| 45 | Jay P. Heubert, Robert M. Hauser, Eds. | "Most standards-based assessments have only recently been implemented or are still being developed. Consequently, it is too early to determine whether they will produce the intended effects on classroom instruction." p.36 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 46 | Jay P. Heubert, Robert M. Hauser, Eds. | "A recent review of the available research evidence by Mehrens (1998) reaches several interim conclusions. Drawing on eight studies...." p.36 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978). |||
| 47 | Jay P. Heubert, Robert M. Hauser, Eds. | "Although there are no national data summarizing how local districts use standardized tests in certifying students, we do know that several of the largest school systems have begun to use test scores in determining grade-to-grade promotion (Chicago) or are considering doing so (New York City, Boston)." p.37 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978). |||
| 48 | Jay P. Heubert, Robert M. Hauser, Eds. | "There is very little research that specifically addresses the consequences of graduation testing." p.172 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978). |||
| 49 | Jay P. Heubert, Robert M. Hauser, Eds. | "Catterall adds, 'initial boasts and doubts alike regarding the effects of gatekeeping competency testing have met with a paucity of follow-up research.'" p.172 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Just some of the relevant pre-2008 studies of the effects of minimum-competency or exit exams and the problems with a single passing score include those of Alvarez, Moreno, & Patrinos (2007); Grodsky & Kalogrides (2006); Audette (2005); Orlich (2003); StandardsWork (2003); Meisels, et al. (2003); Braun (2003); Rosenshine (2003); Tighe, Wang, & Foley (2002); Carnoy & Loeb (2002); Baumert & Demmrich (2001); Rosenblatt & Offer (2001); Phelps (2001); Toenjes, Dworkin, Lorence, & Hill (2000); Wenglinsky (2000); Massachusetts Finance Office (2000); DeMars (2000); Bishop (1999, 2000, 2001, & 2004); Grissmer & Flanagan (1998); Strauss, Bowes, Marks, & Plesko (1998); Frederiksen (1994); Ritchie & Thorkildsen (1994); Chao-Qun & Hui (1993); Potter & Wall (1992); Jacobson (1992); Rodgers, et al. (1991); Morris (1991); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Winfield (1987); Koffler (1987); Losack (1987); Marshall (1987); Hembree (1987); Mangino, Battaille, Washington, & Rumbaut (1986); Michigan Department of Education (1984); Ketchie (1984); Serow (1982); Indiana Education Department (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); Down(2) (1979); Wellisch (1978); and Findley (1978). |||
| 50 | Jay P. Heubert, Robert M. Hauser, Eds. | "in one of the few such studies on this topic (Bishop, 1997) compared the Third International Mathematics and Science Study (TIMSS) test scores of countries with and without rigorous graduation tests. He found that countries with demanding exit exams outperformed other countries at a comparable level of development. He concluded, however that such exams were probably not the most important determinant of achievement levels and that more research was needed." p.173 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978). | |||
| 51 | Jay P. Heubert, Robert M. Hauser, Eds. | "Very little is known about the specific consequences of passing or failing a high school graduation exam." p.176 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978). | |||
| 52 | Jay P. Heubert, Robert M. Hauser, Eds. | "American experience is limited and research is needed to explore their effectiveness. For instance, we do not know how to combine advance notice of high-stakes test requirements, remedial intervention, and opportunity to retake graduation tests." p.180 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Relevant pre-2000 studies of the effects of minimum-competency testing and the problems with a single passing score include those of Frederiksen (1994); Winfield (1990); Ligon, Johnstone, Brightman, Davis, et al. (1990); Losack (1987); Mangino & Babcock (1986); Serow (1982); Brunton (1982); Paramore, et al. (1980); Ogden (1979); and Findley (1978). | |||
| 53 | Jay P. Heubert, Robert M. Hauser, Eds. | "Research is also needed to explore the effects of different kinds of high school credentials on employment and other post-school outcomes." p.180 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | ||||
| 54 | Jay P. Heubert, Robert M. Hauser, Eds. | "At the same time, solid evaluation research on the most effective remedial approaches is sparse." p.183 | Denigrating | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Developmental (i.e., remedial) education researchers have conducted many studies to determine what works best to keep students from failing in their “courses of last resort,” after which there are no alternatives. Researchers have included Boylan, Roueche, McCabe, Wheeler, Kulik, Bonham, Claxton, Bliss, Schonecker, Chen, Chang, and Kirk. | |||
| 56 | Jay P. Heubert, Robert M. Hauser, Eds. | "There is plainly a need for good research on effective remedial education." p.183 | Denigrating | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Developmental (i.e., remedial) education researchers have conducted many studies to determine what works best to keep students from failing in their “courses of last resort,” after which there are no alternatives. Researchers have included Boylan, Roueche, McCabe, Wheeler, Kulik, Bonham, Claxton, Bliss, Schonecker, Chen, Chang, and Kirk. | |||
| 56 | Jay P. Heubert, Robert M. Hauser, Eds. | "However, in most of the nation, much needs to be done before a world-class curriculum and world-class instruction will be in place." p.277 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | ||||
| 57 | Jay P. Heubert, Robert M. Hauser, Eds. | "The committee sees a strong need for better evidence on the benefits and costs of high-stakes testing." p.281 | Denigrating | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | No. See, for example, Phelps, R.P. (2000, Winter). Estimating the cost of systemwide student testing in the United States. Journal of Education Finance, 25(3) 343–380; Danitz, T. (2001, February 27). Special report: States pay $400 million for tests in 2001. Stateline.org. Pew Center for the States; Hoxby, C.M. (2002). The cost of accountability, in W. M Evers & H.J. Walberg (Eds.), School Accountability, Stanford, CA: Hoover Institution Press; U.S. GAO. (1993, January). Student testing: Current extent and expenditures, with cost estimates for a national examination. GAO/PEMD-93-8. Washington, DC: US General Accounting Office; Phelps, R.P. (1998). Benefit-cost analysis of systemwide student testing, Paper presented at the annual meeting of the American Education Finance Association, Mobile, AL. | |||
| 58 | Jay P. Heubert, Robert M. Hauser, Eds. | "Very little is known about the specific consequences of passing or failing a high school graduation exam." p.288 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above. Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry | |||
| 59 | Jay P. Heubert, Robert M. Hauser, Eds. | "At present, however, advanced skills are often not well defined and ways of assessing them are not well established." p.289 | Denigrating | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 60 | Jay P. Heubert, Robert M. Hauser, Eds. | "...in many cases, the demands that full participation of these students [i.e., students with disabilities] place on assessment systems are greater than current assessment knowledge and technology can support." p.191 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 61 | Jay P. Heubert, Robert M. Hauser, Eds. | "...available evidence about the possible effects of graduation tests on learning and on high school dropout is inconclusive (e.g., Kreitzer et al., 1989, Reardon, 1996; Catterall, 1990; Cawthorne, 1990; Bishop, 1997)." | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above. Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry | |||
| 62 | Jay P. Heubert, Robert M. Hauser, Eds. | "We do not know how to combine advance notice of high-stakes test requirements, remedial intervention, and opportunity to retake graduation tests. Research is also needed to explore the effects of different kinds of high school credentials on employment and other post-school outcomes." p.289 | Dismissive | High Stakes: Testing for Tracking, Promotion, and Graduation | Board on Testing and Assessment, National Research Council, 1999 | https://www.nap.edu/catalog/6336/high-stakes-testing-for-tracking-promotion-and-graduation | Ford Foundation | The many studies of district and state minimum competency or diploma testing programs popular from the 1960s through the 1980s found positive effects for students just below the cut score and mixed effects for students far below and anywhere above. Researchers have included Fincher, Jackson, Battiste, Corcoran, Jacobsen, Tanner, Boylan, Saxon, Anderson, Muir, Bateson, Blackmore, Rogers, Zigarelli, Schafer, Hultgren, Hawley, Abrams, Seubert, Mazzoni, Brookhart, Mendro, Herrick, Webster, Orsack, Weerasinghe, and Bembry | |||
| 63 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "But the practical nature of our charge and the limits of the evidence available to us have meant that we have also had to draw on the practical experience of committee members and outside experts in crafting our advice. Hence, this report relies heavily on expert advice from the field, in addition to scientific research." p. vii | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 64 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "we reviewed available evidence from research on assessment, accountability, and standards-based reform. However, we recognized that in many areas the evidentiary base was slim." p.11 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 65 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "Standards-based reform is a new idea, and few places have put all the pieces in place, and even fewer have put them in place long enough to enable scholars to observe their effects." p.11 | 1stness | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 66 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "Yet despite the prominence of standards-based reform in the policy debate, there are few examples of districts or states that have put the entire standards-based puzzle together, much less achieved success through it. Some evidence is beginning to gather." p.16 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 67 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "In large part, the limited body of evidence in this country reflects the complexity of the concept." p.16 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant pre-2000 studies of the effects of standards, alignment, goal setting, setting reachable goals, etc. include those of Mitchell (1999); Morgan & Ramist (1998); the *Southern Regional Education Board (1998); Miles, Bishop, Collins, Fink, Gardner, Grant, Hussain, et al. (1997); the Florida Office of Program Policy Analysis (1997); Pomplun (1997); Schmoker (1996); Aguilera & Hendricks (1996); Banta, Lund, Black & Oblander (1996); Bottoms & Mikos (1995); *Bamburg & Medina (1993); Bishop (1993); the U. S. General Accounting Office (1993); Eckstein & Noah (1993); Mattsson (1993); Brown (1992); Heyneman & Ransom (1992); Whetton (1992); Anderson, Muir, Bateson, Blackmore & Rogers (1990); Csikszentmihalyi (1990); *Levine & Lezotte (1990); LaRoque & Coleman (1989); Hillocks (1987); Willingham & Morris (1986); Resnick & Resnick (1985); Ogle & Fritts (1984); *Natriello & Dornbusch (1984); Brooke & Oxenham (1984); Rentz (1979); Wellisch, MacQueen, Carriere & Dick (1978); *Rosswork (1977); Estes, Colvin & Goodwin (1976); Wood (1953); and Panlasigui & Knight (1930). | |||
| 68 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "Despite the common use of such accommodations, however, there is little research on their effects on the validity of test score information, and most of the research has examined college admission tests and other postsecondary measures, not achievement tests in elementary and secondary schools (National Research Council, 1997a)." p.57 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Relevant studies include: Forte Fast, E., & the Accountability Systems and Reporting State Collaborative on Assessment and Student Standards. (2002). A guide to effective accountability reporting. Washington, DC: Council of Chief State School Officers. * Goodman, D., & Hambleton, R. K. (2005). Some misconceptions about large-scale educational assessments, Chapter 4 in Richard P. Phelps (Ed.), Defending Standardized Testing, Psychology Press. * Goodman, D. P., & Hambleton, R. K. (2004). Student test score reports and interpretive guides: Review of current practices and suggestions for future research. Applied Measurement in Education. * Hambleton, R. K. (2002). How can we make NAEP and state test score reporting scales and reports more understandable? In R. W. Lissitz & W. D. Schafer (Eds.), Assessment in educational reform (pp. 192-205). Boston: Allyn & Bacon. * Impara, J. C., Divine, K. P., Bruce, F. A., Liverman, M. R., & Gay, A. (1991). Does interpretive test score information help teachers? Educational Measurement: Issues and Practice, 10(4), 16-18. * Wainer, H., Hambleton, R. K., & Meara, K. (1999). Alternative displays for communicating NAEP results: A redesign and validity study. Journal of Educational Measurement, 36(4), 301-335. | |||
| 69 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "Because of the paucity of research, questions remain about whether test results from assessments using accommodations represent valid and reliable indicators of what students with disabilities know and are able to do (Koretz, 1997)." p.57 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 70 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "As with accommodations for students with disabilities, the research on the effects of test accommodations for English-language learners is inconclusive." p.62 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 71 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "The small body of research that has examined classrooms in depth suggests that such instructional practices may be rare, even among teachers who say they endorse the changes the standards are intended to foster." p.75 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 72 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "Districts' capacity to monitor the conditions of instruction in schools is limited, and there are few examples of districts that have been shown to be effective in analyzing such conditions and using the data to improve instruction." p.76 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 73 | Richard F. Elmore, Robert Rothman, Eds. | Eva L. Baker, Lauren B. Resnick, Robert L. Linn, Lorraine McDonnell, Lauress L. Wise, Michael Feuer, et al. | "The research base on such efforts is slim, in large part because there are so few examples to study." p.76 | Dismissive | Testing, Teaching, and Learning: A Guide for States and School Districts, 1999 | Committee on Title I Testing and Assessment, Board on Testing and Assessment, National Research Council | "The study was supported by The Pew Charitable Trusts (award 96000217-000), The Spencer Foundation (award 199700156), The William T. Grant Foundation (award 97179797), and the U.S. Department of Education (award R305U960001)" | Difficult to believe given that the federal government has for decades generously funded research into testing students with disabilities. See, for example, https://nceo.info/ and Kurt Geisinger's and Janet Carlson's chapters in Defending Standardized Testing and Correcting Fallacies in Educational and Psychological Testing. | |||
| 74 | Hartigan, J. A., & Wigdor, A. K. | "The empirical evidence cited for the standard deviation of worker productivity is quite slight." p.239 | Dismissive | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 75 | Hartigan, J. A., & Wigdor, A. K. | "Some fragmentary confirming evidence that supports this point of view can be found in Hunter et al. (1988)... We regard the Hunter and Schmidt assumption as plausible but note that there is very little evidence about the nature of the relationship of ability to output." p.243 | Dismissive | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 76 | Hartigan, J. A., & Wigdor, A. K. | "It is also important to remember that the most important assumptions of the Hunter-Schmidt models rest on a very slim empirical foundation .... Hunter and Schmidt's economy-wide models are based on simple assumptions for which the empirical evidence is slight." p.245 | Dismissive, Denigrating | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 79 | Hartigan, J. A., & Wigdor, A. K. | "That assumption is supported by only a very few studies." p.245 | Dismissive, Denigrating | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 80 | Hartigan, J. A., & Wigdor, A. K. | "There is no well-developed body of evidence from which to estimate the aggregate effects of better personnel selection...we have seen no empirical evidence that any of them provide an adequate basis for estimating the aggregate economic effects of implementing the VG-GATB on a nationwide basis." p.247 | Dismissive, Denigrating | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 81 | Hartigan, J. A., & Wigdor, A. K. | "Furthermore, given the state of scientific knowledge, we do not believe that realistic dollar estimates of aggregate gains from improved selection are even possible." p.248 | Dismissive | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| 82 | Hartigan, J. A., & Wigdor, A. K. | "...primitive state of knowledge..." p.248 | Denigrating | Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. | Washington, DC: National Academy Press, 1989 | https://www.nap.edu/catalog/1338/fairness-in-employment-testing-validity-generalization-minority-issues-and-the | National Research Council funders | See, for example, The National Research Council’s Testing Expertise, https://www.apa.org/pubs/books/supplemental/correcting-fallacies-educational-psychological-testing/Phelps Web Appendix D new.doc | |||
| IRONIES: | |||||||||||
| Michael J. Feuer | "It is our way of reminding ourselves, and others, that we hold to high evidentiary standards when it comes to programs or policies that affect the lives of people or the workings of organizations." p.98 | Past as Prologue: The National Academy of Education at 50 | National Academy of Education funders | ||||||||
| Michael J. Feuer | "Other societies have tried to suppress science when it interferes with politics or religion." p.97 | Past as Prologue: The National Academy of Education at 50 | National Academy of Education funders | ||||||||
| Michael J. Feuer | "We invite and pay for an extraordinary amount of certifiably expert input to feed our apparently insatiable appetite for data." p.98 | Past as Prologue: The National Academy of Education at 50 | National Academy of Education funders | ||||||||
| Michael J. Feuer | "… one advantage of the Academy depends on keeping evidence ahead of advocacy—even if we are not sure how to define evidence and appreciate the passions that bring people to this work in the first place." p.98 | Past as Prologue: The National Academy of Education at 50 | National Academy of Education funders | ||||||||
| Michael J. Feuer | "… NRC report review, which is often the butt of humor because it appears to privilege rigor over relevance;" p.99 | Past as Prologue: The National Academy of Education at 50 | National Academy of Education funders | ||||||||
| Michael J. Feuer | "To challenge authority is to hold authority accountable. Challenging people in power requires them to show that what they are doing is legitimate; we invite them to rise to the challenge and prove their case; and they, in turn, trust that the system will treat them fairly." | Measuring Accountability When Trust Is Conditional | Education Week, September 24, 2012 | https://www.edweek.org/ew/articles/2012/09/24/05feuer_ep.h32.html?print=1 | |||||||
| Michael J. Feuer | "No profession is granted automatic autonomy or an exemption from evaluation." | Measuring Accountability When Trust Is Conditional | Education Week, September 24, 2012 | https://www.edweek.org/ew/articles/2012/09/24/05feuer_ep.h32.html?print=1 | |||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "Rarely does one study produce an unequivocal and durable result; multiple methods, applied over time and tied to evidentiary standards, are essential to establishing a base of scientific knowledge." p.2 | Executive Summary, Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "Formal syntheses of research findings across studies are often necessary to discover, test, and explain the diversity of findings that characterize many fields." p.2 | Executive Summary, Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "Scientific inquiry emphasizes checking and validating individual findings and results. Since all studies rely on a limited set of observations, a key question is how individual findings generalize to broader populations and settings. Ultimately, scientific knowledge advances when findings are reproduced in a range of times and places and when findings are integrated and synthesized." p.4 | Executive Summary, Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "Scientific studies do not contribute to a larger body of knowledge until they are widely disseminated and subjected to professional scrutiny by peers. This ongoing, collaborative, public critique is an indication of the health of a scientific enterprise. Indeed, the objectivity of science derives from publicly enforced norms of the professional community of scientists, rather than from the character traits of any individual person or design features of any study." p.5 | Executive Summary, Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "To make progress possible, ... the community of inquirers must be, in Karl Popper’s expression, “open societies” that encourage the free flow of critical comment. Researchers have an obligation to avoid seeking only such evidence that apparently supports their favored hypotheses; they also must seek evidence that is incompatible with these hypotheses even if such evidence, when found, would refute their ideas." pp.18-19 | Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "A second characteristic of knowledge accumulation is that it is contested. Scientists are trained and employed to be skeptical observers, to ask critical questions, and to challenge knowledge claims in constructive dialogue with their peers. … it is essentially these norms of the scientific community engaging in such professional critique of each other’s work that enables scientific consensus and extends the boundaries of what is known." p.46 | Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | " ... a characteristic of scientific knowledge accumulation is its contested nature. Here we suggest that science is not only characterized by professional scrutiny and criticism, but also that such criticism is essential to scientific progress. Scientific studies usually are elements of a larger corpus of work; furthermore, the scientists carrying out a particular study always are part of a larger community of scholars. Reporting and reviewing research results are essential to enable wide and meaningful peer review." p.72 | Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | | | | | | | |
| Richard J. Shavelson & Lisa Towne, Eds. | "… the goals of research reporting are to communicate the findings from the investigation; to open the study to examination, criticism, review, and replication." p.72 | Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Richard J. Shavelson & Lisa Towne, Eds. | "Quite the contrary: intellectual debate at professional meetings, through research collaborations, and in other settings provide the means by which scientific knowledge is refined and accepted; scientists strive for an “open society” where criticism and unfettered debate point the way to advancement." p.73 | Scientific Research in Education, (2002) | Committee on Scientific Principles for Education Research, National Research Council | ||||||||
| Author cites (and accepts as fact, without checking) someone else's dismissive review | | | | | | | | | | | |
| Cite themselves or colleagues in the group, but dismiss or denigrate all other work | | | | | | | | | | | |
| Falsely claim that research has only recently been done on the topic | | | | | | | | | | | |