{"id":283,"date":"2016-02-12T10:06:06","date_gmt":"2016-02-12T15:06:06","guid":{"rendered":"http:\/\/nonpartisaneducation.org\/blog1\/?p=283"},"modified":"2025-12-12T23:04:28","modified_gmt":"2025-12-13T04:04:28","slug":"fordham-institutes-pretend-research","status":"publish","type":"post","link":"https:\/\/nonpartisaneducation.org\/blog1\/2016\/02\/fordham-institutes-pretend-research\/","title":{"rendered":"Fordham Institute\u2019s pretend research"},"content":{"rendered":"<p>The Thomas B. Fordham Institute has released a report, <em>Evaluating the Content and Quality of Next Generation Assessments<\/em>,<a href=\"#_edn1\" name=\"_ednref1\">[i]<\/a> ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT\u2019s Aspire, and the Commonwealth of Massachusetts\u2019 MCAS.<a href=\"#_edn2\" name=\"_ednref2\">[ii]<\/a> Of course, anyone familiar with Fordham\u2019s past work knew beforehand which tests would win.<\/p>\n<p>This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.<\/p>\n<ol>\n<li>Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core\u2019s wealthy benefactors. 
But, they imply that they have covered a relevant and adequately wide range of sources.<\/li>\n<li>Instead of evaluating tests according to the industry standard <em>Standards for Educational and Psychological Testing<\/em>, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ \u201ca brand new methodology\u201d specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core\u2019s funders.<\/li>\n<li>Instead of suggesting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.<\/li>\n<li>Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away\u2014\u201cIf you don\u2019t care for the standards\u2026you should probably ignore this study\u201d (p.
4).<\/li>\n<li>Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.<\/li>\n<\/ol>\n<p><strong>1.\u00a0<\/strong> Common Core\u2019s primary private financier, the Bill &amp; Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.<a href=\"#_edn3\" name=\"_ednref3\">[iii]<\/a> A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or \u201cgeneral operating support.\u201d<a href=\"#_edn4\" name=\"_ednref4\">[iv]<\/a> Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation\u2019s worst.<a href=\"#_edn5\" name=\"_ednref5\">[v]<\/a><\/p>\n<p style=\"text-align: left;\">The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:<\/p>\n<ul>\n<li>the Human Resources Research Organization (HumRRO),<a href=\"#_edn6\" name=\"_ednref6\">[vi]<\/a><\/li>\n<li>the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation \u201cCriteria.\u201d<a href=\"#_edn7\" name=\"_ednref7\">[vii]<\/a><\/li>\n<li>the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),<a href=\"#_edn8\" name=\"_ednref8\">[viii]<\/a> and<\/li>\n<li>Student Achievement Partners, the organization 
that claims to have inspired the Common Core standards<a href=\"#_edn9\" name=\"_ednref9\">[ix]<\/a><\/li>\n<\/ul>\n<p>The Common Core\u2019s <em>grandees<\/em> have always only hired their own well-subsidized <em>grantees<\/em> for evaluations of their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual <em>Mental Measurements Yearbook <\/em>for the entire world to see, and critique. Indeed, Buros exists to conduct test reviews, and retains hundreds of the world\u2019s brightest and most independent psychometricians on its reviewer roster. Why did Common Core\u2019s funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute, \u2026that is, had the purpose genuinely been an objective evaluation.<\/p>\n<p><strong>2.<\/strong>\u00a0 A second reason Fordham\u2019s intentions are suspect rests with their choice of evaluation criteria. The \u201cbible\u201d of North American testing experts is the <em>Standards for Educational and Psychological Testing<\/em>, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.<a href=\"#_edn10\" name=\"_ednref10\">[x]<\/a><\/p>\n<p>Had Fordham compared the tests using the <em>Standards for Educational and Psychological Testing<\/em> (or any of a number of other widely-respected test evaluation standards, guidelines, or protocols<a href=\"#_edn11\" name=\"_ednref11\">[xi]<\/a>) SBAC and PARCC would have flunked. 
They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.<a href=\"#_edn12\" name=\"_ednref12\">[xii]<\/a><\/p>\n<p>Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond\u2019s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.<a href=\"#_edn13\" name=\"_ednref13\">[xiii]<\/a><sup>,<\/sup><a href=\"#_edn14\" name=\"_ednref14\">[xiv]<\/a> Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.<a href=\"#_edn15\" name=\"_ednref15\">[xv]<\/a><\/p>\n<p>The authors write \u201cThe quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels\u201d (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict.
Regardless, given the evaluation criteria employed, the outcome was foreordained no matter whom they hired to review, not unlike a rigged election in a dictatorship where voters\u2019 decisions are restricted to already-chosen candidates.<\/p>\n<p>Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO\u2019s \u201cCriteria.\u201d But Fordham chose to compare on only 14 of the criteria.<a href=\"#_edn16\" name=\"_ednref16\">[xvi]<\/a> And those just happened to be criteria mostly favoring PARCC and SBAC.<\/p>\n<p>Without exception the Fordham study avoided all the evaluation criteria in the categories:<\/p>\n<p style=\"padding-left: 30px;\">\u201cMeet overall assessment goals and ensure technical quality\u201d,<\/p>\n<p style=\"padding-left: 30px;\">\u201cYield valuable reports on student progress and performance\u201d,<\/p>\n<p style=\"padding-left: 30px;\">\u201cAdhere to best practices in test administration\u201d, and<\/p>\n<p style=\"padding-left: 30px;\">\u201cState specific criteria\u201d<a href=\"#_edn17\" name=\"_ednref17\">[xvii]<\/a><\/p>\n<p>What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state\u2019s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).<\/p>\n<p>The report does not evaluate the \u201cquality\u201d of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests.
The only evaluative criteria used from the CCSSO\u2019s Criteria are in the two categories \u201cAlign to Standards\u2014English Language Arts\u201d and \u201cAlign to Standards\u2014Mathematics\u201d and, even then, only for grades 5 and 8.<\/p>\n<p>Nonetheless, the authors claim, \u201cThe methodology used in this study is highly comprehensive\u201d (p. 74).<\/p>\n<p>The authors of the Pioneer Institute\u2019s report <em>How PARCC\u2019s false rigor stunts the academic growth of all students,<\/em><a href=\"#_edn18\" name=\"_ednref18\">[xviii]<\/a> recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core\u2019s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state\u2019s mandated tests to a more responsible agency.<\/p>\n<p>Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice\/short answer\/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)<a href=\"#_edn19\" name=\"_ednref19\">[xix]<\/a>. Ironically, it is they\u2014opponents of traditional testing content and formats\u2014who propose that standardized tests <em>measure everything<\/em>. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.<\/p>\n<p>Consider this standard from the Linda Darling-Hammond, et al. 
source document for the CCSSO criteria:<\/p>\n<p style=\"padding-left: 30px;\">\u201dResearch: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.\u201d<a href=\"#_edn20\" name=\"_ednref20\">[xx]<\/a><\/p>\n<p>Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure \u201csustained research\u201d in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.<\/p>\n<p>But, that is not how research works. It is hardly the type of deliberation that comes to most people\u2019s mind when they think about \u201csustained research\u201d. 
Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; \u201csustained research\u201d should be measured more authentically.<\/p>\n<p>The authors of the aforementioned Pioneer Institute report recommend, as their 7<sup>th<\/sup> policy recommendation for Massachusetts:<\/p>\n<p style=\"padding-left: 30px;\">\u201cEstablish a junior\/senior-year interdisciplinary research paper requirement as part of the state\u2019s graduation requirements\u2014to be assessed at the local level following state guidelines\u2014to prepare all students for authentic college writing.\u201d<a href=\"#_edn21\" name=\"_ednref21\">[xxi]<\/a><\/p>\n<p>PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two.<a href=\"#_edn22\" name=\"_ednref22\"><\/a> It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC and SBAC tests \u201cdeeper\u201d than others. In practice, the alleged deeper parts are the most convoluted and superficial.<\/p>\n<p>Appendix A of the source document for the CCSSO criteria provides three international examples of \u201chigh-quality assessments\u201d in Singapore, Australia, and England.<a href=\"#_edn23\" name=\"_ednref23\">[xxiii]<\/a> None are standardized test components. Rather, all are projects developed over extended periods of time\u2014weeks or months\u2014as part of regular course requirements.<\/p>\n<p>Common Core proponents scoured the globe to locate \u201cinternational benchmark\u201d examples of the type of convoluted (i.e., \u201chigher\u201d, \u201cdeeper\u201d) test questions included in PARCC and SBAC tests. They found none.<\/p>\n<p><strong>3.<\/strong>\u00a0 The authors continue the Common Core sales tendency of attributing benefits to their tests for which no evidence exists. 
For example, the Fordham report claims that SBAC and PARCC will:<\/p>\n<p style=\"padding-left: 30px;\">\u201cmake traditional \u2018test prep\u2019 ineffective\u201d (p. 8)<\/p>\n<p style=\"padding-left: 30px;\">\u201callow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do\u201d (p. 8)<\/p>\n<p style=\"padding-left: 30px;\">produce \u201ctest scores that more accurately predict students\u2019 readiness for entry-level coursework or training\u201d (p. 11)<\/p>\n<p style=\"padding-left: 30px;\">\u201creliably measure the essential skills and knowledge needed \u2026 to achieve college and career readiness by the end of high school\u201d (p. 11)<\/p>\n<p style=\"padding-left: 30px;\">\u201c\u2026accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.\u201d (p. 3)<\/p>\n<p style=\"padding-left: 30px;\">eliminate the problem of \u201cstudents \u2026 forced to waste time and money on remedial coursework.\u201d (p. 73)<\/p>\n<p style=\"padding-left: 30px;\">help \u201ceducators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students\u2019 success.\u201d (p. 73)<\/p>\n<p>The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They share more in common with carnival fortune telling than empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But, that just affirms how irresponsible it is to make such claims absent any evidence.<\/p>\n<p>Furthermore, in most cases, past experience would suggest just the opposite of what Fordham asserts. 
Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, convoluted), introducing more \u201cconstruct irrelevant variance\u201d\u2014that is, students will get lower scores for not managing to figure out formats or computer operations issues, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.<\/p>\n<p>As for Common Core, SBAC, and PARCC eliminating the \u201cproblem of\u201d college remedial courses, such will be done by simply cancelling remedial courses, whether or not they might be needed, and lowering college entry-course standards to the level of current remedial courses.<\/p>\n<p><strong>4.<\/strong>\u00a0 When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: \u201cIf you don\u2019t care for the standards\u2026you should probably ignore this study\u201d (p. 4).<\/p>\n<p>Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries, and a scholar of curriculum and instruction so widely respected she could easily have gotten wealthy had she chosen to succumb to the financial temptation of the Common Core\u2019s profligacy as so many others have. Stotsky authored the Fordham Institute\u2019s \u201cvery first study\u201d in 1997, apparently. Presumably, the authors of this report drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)<\/p>\n<p>Tellingly, one will find Stotsky\u2019s name nowhere after the first paragraph. None of her (or anyone else\u2019s) many devastating critiques of the Common Core tests is either mentioned or referenced. 
Genuine research does not hide or dismiss its critiques; it addresses them.<\/p>\n<p>Ironically, the authors write, \u201cA discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.\u201d Indeed.<\/p>\n<p><strong>5.<\/strong>\u00a0 Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don\u2019t.<\/p>\n<p>For PARCC and SBAC one reads:<\/p>\n<p style=\"padding-left: 30px;\">\u201cstrong content, quality, and rigor\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cstronger tests, which encourage better, broader, richer instruction\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201ctests that focus on the essential skills and give clear signals\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cmajor improvements over the previous generation of state tests\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201ccomplex skills they are assessing.\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chigh-quality assessment\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chigh-quality assessments\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chigh-quality tests\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chigh-quality test items\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chigh quality and provide meaningful information\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201ccarefully-crafted tests\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cthese tests are tougher\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cmore rigorous tests that challenge students more than they have been challenged in the past\u201d<\/p>\n<p>For other tests one reads:<\/p>\n<p style=\"padding-left: 30px;\">\u201clow-quality assessments poorly aligned with the standards\u201d<\/p>\n<p 
style=\"padding-left: 30px;\">\u201cwill undermine the content messages of the standards\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201ca best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today\u2019s college and career readiness standards\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chave generally focused on low-level skills\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chave given students and parents false signals about the readiness of their children for postsecondary education and the workforce\u201d<\/p>\n<p>Appraising its own work, Fordham writes:<\/p>\n<p style=\"padding-left: 30px;\">\u201cgroundbreaking evaluation\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cmeticulously assembled panels\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201chighly qualified yet impartial reviewers\u201d<\/p>\n<p>Considering those who have adopted SBAC or PARCC, Fordham writes:<\/p>\n<p style=\"padding-left: 30px;\">\u201cthankfully, states have taken courageous steps\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cstates\u2019 adoption of college and career readiness standards has been a bold step in the right direction.\u201d<\/p>\n<p style=\"padding-left: 30px;\">\u201cadopting and sticking with high-quality assessments requires courage.\u201d<\/p>\n<p>A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.<\/p>\n<p>The Fordham report claims that PARCC and SBAC are governed by \u201cmember states\u201d, whereas ACT Aspire is owned by a private organization. 
Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors\u2019 Association, and only each state\u2019s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state or, rather, its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.<a href=\"#_edn24\" name=\"_ednref24\">[xxiv]<\/a><\/p>\n<p>Just so you know, on page 71, Fordham recommends that states eliminate any tests that are not aligned to the Common Core Standards, in the interest of efficiency, supposedly.<\/p>\n<p>In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, \u201cWe at Fordham don\u2019t plan to stay in the test-evaluation business\u201d.<\/p>\n<p><a href=\"#_ednref1\" name=\"_edn1\">[i]<\/a> Nancy Doorey &amp; Morgan Polikoff. (2016, February). <em>Evaluating the content and quality of next generation assessments<\/em>. With a Foreword by Amber M. Northern &amp; Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute.
<a href=\"http:\/\/edexcellence.net\/publications\/evaluating-the-content-and-quality-of-next-generation-assessments\">https:\/\/edexcellence.net\/publications\/evaluating-the-content-and-quality-of-next-generation-assessments<\/a><\/p>\n<p><a href=\"#_ednref2\" name=\"_edn2\">[ii]<\/a> PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).<\/p>\n<p><a href=\"#_ednref3\" name=\"_edn3\">[iii]<\/a> The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli to each pay themselves two six figure salaries instead of just one.<\/p>\n<p><a href=\"#_ednref4\" name=\"_edn4\">[iv]<\/a> <a href=\"http:\/\/www.gatesfoundation.org\/search#q\/k=Fordham\">https:\/\/www.gatesfoundation.org\/search#q\/k=Fordham<\/a><\/p>\n<p><a href=\"#_ednref5\" name=\"_edn5\">[v]<\/a> See, for example, <a href=\"http:\/\/www.ohio.com\/news\/local\/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318\">https:\/\/www.ohio.com\/news\/local\/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318<\/a> ; <a href=\"http:\/\/www.cleveland.com\/metro\/index.ssf\/2015\/03\/ohios_charter_schools_ridicule.html\">https:\/\/www.cleveland.com\/metro\/index.ssf\/2015\/03\/ohios_charter_schools_ridicule.html<\/a> ; <a href=\"http:\/\/www.dispatch.com\/content\/stories\/local\/2014\/12\/18\/kasich-to-revamp-ohio-laws-on-charter-schools.html\">https:\/\/www.dispatch.com\/content\/stories\/local\/2014\/12\/18\/kasich-to-revamp-ohio-laws-on-charter-schools.html<\/a> ; <a 
href=\"https:\/\/www.washingtonpost.com\/news\/answer-sheet\/wp\/2015\/06\/12\/troubled-ohio-charter-schools-have-become-a-joke-literally\/\">https:\/\/www.washingtonpost.com\/news\/answer-sheet\/wp\/2015\/06\/12\/troubled-ohio-charter-schools-have-become-a-joke-literally\/<\/a><\/p>\n<p><a href=\"#_ednref6\" name=\"_edn6\">[vi]<\/a> HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.<\/p>\n<p><a href=\"#_ednref7\" name=\"_edn7\">[vii]<\/a> CCSSO has received 23 grants from the Bill &amp; Melinda Gates Foundation from \u201c2009 and earlier\u201d to 2016 collectively exceeding $100 million. <a href=\"http:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=CCSSO\">https:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=CCSSO<\/a><\/p>\n<p><a href=\"#_ednref8\" name=\"_edn8\">[viii]<\/a> <a href=\"http:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22\">https:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22<\/a><\/p>\n<p><a href=\"#_ednref9\" name=\"_edn9\">[ix]<\/a> Student Achievement Partners has received four grants from the Bill &amp; Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. <a href=\"http:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=%22Student%20Achievement%20Partners%22\">https:\/\/www.gatesfoundation.org\/How-We-Work\/Quick-Links\/Grants-Database#q\/k=%22Student%20Achievement%20Partners%22<\/a><\/p>\n<p><a href=\"#_ednref10\" name=\"_edn10\">[x]<\/a> The authors write that the standards they use are \u201cbased on\u201d the real <em>Standards<\/em>. But, that is like saying that Cheez Whiz is based on cheese. 
Some real cheese might be mixed in there, but it\u2019s not the product\u2019s most distinguishing ingredient.<\/p>\n<p><a href=\"#_ednref11\" name=\"_edn11\">[xi]<\/a> (e.g., the International Test Commission\u2019s (ITC) <em>Guidelines for Test Use<\/em>; the ITC <em>Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores<\/em>; the ITC <em>Guidelines on the Security of Tests, Examinations, and Other Assessments<\/em>; the ITC\u2019s <em>International Guidelines on Computer-Based and Internet-Delivered Testing<\/em>; the European Federation of Psychologists\u2019 Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)<\/p>\n<p><a href=\"#_ednref12\" name=\"_edn12\">[xii]<\/a> Despite all the adjectives and adverbs implying newness to PARCC and SBAC as \u201cNext Generation Assessment\u201d, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly \u201chigher-order\u201d, more \u201cauthentic\u201d, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. 
Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.<\/p>\n<p>See, for example, <strong>For California: <\/strong>Michael W. Kirst &amp; Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, <em>Phi Delta Kappan<\/em>, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. <em>Education Reporter, 141<\/em>. See also Klein, David. (2003). \u201cA Brief History Of American K-12 Mathematics Education In the 20th Century\u201d, In James M. Royer, (Ed.), <em>Mathematical Cognition<\/em>, (pp. 175\u2013226). Charlotte, NC: Information Age Publishing. <strong>For Kentucky: <\/strong>ACT. (1993). \u201cA study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky\u2019s performance-based assessment results and ACT-tested Kentucky graduates of 1992\u201d. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent\u2019s point of view. Louisville, KY: Author. <a href=\"http:\/\/www.eddatafrominnes.com\/index.html\">https:\/\/www.eddatafrominnes.com\/index.html<\/a> ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky\u2019s first experiment. <strong>For Maryland: <\/strong>P. H. Hamp, &amp; C. B. Summers. (2002, Fall). \u201cEducation.\u201d In P. H. Hamp &amp; C. B. Summers (Eds.), A guide to the issues 2002\u20132003. Maryland Public Policy Institute, Rockville, MD. <a href=\"http:\/\/www.mdpolicy.org\/docLib\/20051030Education.pdf\">https:\/\/www.mdpolicy.org\/docLib\/20051030Education.pdf<\/a> ; Montgomery County Public Schools. (2002, Feb. 11). 
\u201cJoint Teachers\/Principals Letter Questions MSPAP\u201d, Public Announcement, Rockville, MD. <a href=\"http:\/\/www.montgomeryschoolsmd.org\/press\/index.aspx?pagetype=showrelease&amp;id=644\">https:\/\/www.montgomeryschoolsmd.org\/press\/index.aspx?pagetype=showrelease&amp;id=644<\/a> ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. <a href=\"http:\/\/www.humrro.org\/corpsite\/page\/linking-teacher-practice-statewide-assessment-education\">https:\/\/www.humrro.org\/corpsite\/page\/linking-teacher-practice-statewide-assessment-education<\/a><\/p>\n<p><a href=\"#_ednref13\" name=\"_edn13\">[xiii] <\/a><a href=\"http:\/\/www.ccsso.org\/Documents\/2014\/CCSSO%20Criteria%20for%20High%20Quality%20Assessments%2003242014.pdf\">https:\/\/www.ccsso.org\/Documents\/2014\/CCSSO Criteria for High Quality Assessments 03242014.pdf<\/a><\/p>\n<p><a href=\"#_ednref14\" name=\"_edn14\">[xiv]<\/a> A rationale is offered for why they <em>had to<\/em> develop a brand new set of test evaluation criteria (p. 13). Fordham claims that new criteria were needed, which weighted some criteria more than others. But, weights could easily be applied to any criteria, including the tried-and-true, preexisting ones.<\/p>\n<p><a href=\"#_ednref15\" name=\"_edn15\">[xv]<\/a> For an extended critique of the CCSSO Criteria employed in the Fordham report, see \u201cAppendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments\u201d in Mark McQuillan, Richard P. Phelps, &amp; Sandra Stotsky. (2015, October). <em>How PARCC\u2019s false rigor stunts the academic growth of all students<\/em>. Boston: Pioneer Institute, pp. 62-68. <a href=\"http:\/\/pioneerinstitute.org\/news\/testing-the-tests-why-mcas-is-better-than-parcc\/\">https:\/\/pioneerinstitute.org\/news\/testing-the-tests-why-mcas-is-better-than-parcc\/<\/a><\/p>\n<p><a href=\"#_ednref16\" name=\"_edn16\">[xvi]<\/a> Doorey &amp; Polikoff, p. 
14.<\/p>\n<p><a href=\"#_ednref17\" name=\"_edn17\">[xvii]<\/a> MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) that it serve as a grade 10 high school exit exam, test students in several subject fields (not just ELA and math), and provide specific and timely instructional feedback.<\/p>\n<p><a href=\"#_ednref18\" name=\"_edn18\">[xviii]<\/a> McQuillan, M., Phelps, R.P., &amp; Stotsky, S. (2015, October). <em>How PARCC\u2019s false rigor stunts the academic growth of all students<\/em>. Boston: Pioneer Institute. <a href=\"http:\/\/pioneerinstitute.org\/news\/testing-the-tests-why-mcas-is-better-than-parcc\/\">https:\/\/pioneerinstitute.org\/news\/testing-the-tests-why-mcas-is-better-than-parcc\/<\/a><\/p>\n<p><a href=\"#_ednref19\" name=\"_edn19\">[xix]<\/a> It is perhaps the most enlightening paradox that, amid Common Core proponents\u2019 profuse effusion of superlative adjectives and adverbs advertising their \u201cinnovative\u201d, \u201cnext generation\u201d research results, the words \u201cdeeper\u201d and \u201chigher\u201d mean the same thing.<\/p>\n<p><a href=\"#_ednref20\" name=\"_edn20\">[xx]<\/a> The document asserts, \u201cThe Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.\u201d Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). <em>Criteria for high-quality assessment<\/em>. 
Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. <a href=\"https:\/\/edpolicy.stanford.edu\/publications\/pubs\/847\">https:\/\/edpolicy.stanford.edu\/publications\/pubs\/847<\/a><\/p>\n<p><a href=\"#_ednref21\" name=\"_edn21\">[xxi]<\/a> McQuillan, Phelps, &amp; Stotsky, p. 46.<\/p>\n<p><a href=\"#_ednref23\" name=\"_edn23\">[xxiii]<\/a> Linda Darling-Hammond, et al., pp. 16-18. <a href=\"https:\/\/edpolicy.stanford.edu\/publications\/pubs\/847\">https:\/\/edpolicy.stanford.edu\/publications\/pubs\/847<\/a><\/p>\n<p><a href=\"#_ednref24\" name=\"_edn24\">[xxiv]<\/a> For an in-depth discussion of these governance issues, see Peter Wood\u2019s excellent Introduction to <em>Drilling Through the Core<\/em>, <a href=\"http:\/\/www.amazon.com\/gp\/product\/0985208694\">https:\/\/www.amazon.com\/gp\/product\/0985208694<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Thomas B. 
Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT\u2019s Aspire, and the Commonwealth of Massachusetts\u2019 MCAS.[ii] &hellip; <a href=\"https:\/\/nonpartisaneducation.org\/blog1\/2016\/02\/fordham-institutes-pretend-research\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_s2mail":"no","footnotes":""},"categories":[25,80,31,90,89,32,47,33,71,23,14,1],"tags":[115,19,121,106,120,122,117,123,124,119,116,100,118,64],"class_list":["post-283","post","type-post","status-publish","format-standard","hentry","category-college-prep","category-common-core","category-education-policy-2","category-education-reform","category-ethics","category-k-12","category-mathematics","category-reading-writing","category-research-ethics","category-richard-p-phelps","category-testingassessment","category-uncategorized","tag-ccsso","tag-cresst","tag-evaluation","tag-fordham-institute","tag-gates-foundation","tag-guidelines","tag-humrro","tag-protocols","tag-review","tag-rigor","tag-scope","tag-standards","tag-student-achievement-partners","tag-testing"],"_links":{"self":[{"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/posts\/283","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/comments?post=283"}],"version-history":[{"count":10,"href":"https:\/\/nonpartisaneduca
tion.org\/blog1\/wp-json\/wp\/v2\/posts\/283\/revisions"}],"predecessor-version":[{"id":1505,"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/posts\/283\/revisions\/1505"}],"wp:attachment":[{"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/media?parent=283"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/categories?post=283"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nonpartisaneducation.org\/blog1\/wp-json\/wp\/v2\/tags?post=283"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}