Fordham report predictable, conflicted

On November 17, the Massachusetts Board of Elementary and Secondary Education (BESE) will decide the fate of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC) in the Bay State. MCAS is homegrown; PARCC is not. Barring unexpected compromises or subterfuges, only one program will survive.

Over the past year, PARCC promoters have released a stream of reports comparing the two testing programs. The latest arrives from the Thomas B. Fordham Institute in the form of a partial evaluation “of the content and quality of the 2014 MCAS and PARCC” relative to the “Criteria for High Quality Assessments”[i] developed by one of the organizations that sponsored Common Core’s standards—with the rest of the report to be delivered in January, it says.[ii]

PARCC continues to insult our intelligence. The language of the “special report” sent to Mitchell Chester, Commissioner of Elementary and Secondary Education, reads like a legitimate study.[iii] The research it purports to have done even incorporated some processes typically employed in studies with genuine intentions of objectivity.

No such intentions could validly be ascribed to the Fordham report.

First, Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the standards and its associated testing programs. A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

– the Human Resources Research Organization (HumRRO), which will deliver another pro-PARCC report sometime soon;[vi]
– the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the “Criteria”;[vii]
– the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of the other federally subsidized Common Core-aligned testing program, the Smarter Balanced Assessment Consortium (SBAC);[viii] and
– Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

Fordham acknowledges the pervasive conflicts of interest it claims it faced in locating people to evaluate MCAS versus PARCC: “…it is impossible to find individuals with zero conflicts who are also experts.”[x] But the statement is false; hundreds, perhaps even thousands, of individuals experienced in “alignment or assessment development studies” were available.[xi] That they were not called reveals Fordham’s preferences.

A second reason Fordham’s intentions are suspect rests with its choice of evaluation criteria. The “bible” of test developers is the Standards for Educational and Psychological Testing, jointly produced by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. Fordham did not use it.

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-sponsored the development of Common Core’s standards (the Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), and a handful of others. Thus, Fordham compares PARCC to MCAS according to specifications that were designed for PARCC.[xii]

Had Fordham compared MCAS and PARCC using the Standards for Educational and Psychological Testing, MCAS would have passed and PARCC would have flunked. PARCC has not yet accumulated the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests it will fail on all three counts.[xiii]

Third, had Fordham compared MCAS and PARCC using all 24-plus of CCSSO’s “Criteria,” PARCC would have flunked as well. But Fordham chose to compare on only 15 of the criteria.[xiv] And those just happened to be the criteria favoring PARCC.

Fordham agreed to compare the two tests with respect to their alignment to Common Core-based criteria. With just one exception, the Fordham study avoided all the criteria in the groups “Meet overall assessment goals and ensure technical quality,” “Yield valuable report on student progress and performance,” “Adhere to best practices in test administration,” and “State specific criteria.”[xv]

Not surprisingly, Fordham’s “memo” favors the Bay State’s adoption of PARCC. However, the authors of How PARCC’s false rigor stunts the academic growth of all students,[xvi] released one week before Fordham’s “memo,” recommend strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. Nor do they recommend continuing with the current MCAS, which is likewise based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that ordinary multiple-choice-predominant standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xvii]. Ironically, it is they—opponents of traditional testing regimes—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xviii]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is done by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But that is not how research works. It is hardly the type of deliberation that comes to most people’s minds when they think about “sustained research.” Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xix]

PARCC and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two.[xx] It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC tests “deeper” than others. In practice, the alleged deeper parts of PARCC are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxi] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

Dr. Richard P. Phelps is editor or author of four books: Correcting Fallacies about Educational and Psychological Testing (APA, 2008/2009); Standardized Testing Primer (Peter Lang, 2007); Defending Standardized Testing (Psychology Press, 2005); and Kill the Messenger (Transaction, 2003, 2005), and founder of the Nonpartisan Education Review.

[i] 03242014.pdf

[ii] Michael J. Petrilli & Amber M. Northern. (2015, October 30). Memo to Dr. Mitchell Chester, Commissioner of Elementary and Secondary Education, Massachusetts Department of Elementary and Secondary Education. Washington, DC: Thomas B. Fordham Institute.

[iii] Nancy Doorey & Morgan Polikoff. (2015, October). Special report: Evaluation of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC). Washington, DC: Thomas B. Fordham Institute.


[v] See, for example, ; ; ;

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 22 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2015 exceeding $90 million.


[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million.

[x] Doorey & Polikoff, p. 4.

[xi] To cite just one example, the world-renowned Center for Educational Measurement at the University of Massachusetts-Amherst has accumulated abundant experience conducting alignment studies.

[xii] For an extended critique of the CCSSO criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68.

[xiii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example: For California: Michael W. Kirst & Christopher Mazzeo. (1997, December). The rise, fall, and rise of state assessment in California: 1993–96. Phi Delta Kappan, 78(4); Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session. (1998, January 21). National testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin. (1997, October). Comparing assessments and tests. Education Reporter, 141; and David Klein. (2003). A brief history of American K-12 mathematics education in the 20th century. In James M. Royer (Ed.), Mathematical Cognition (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). A study of core course-taking patterns: ACT-tested graduates of 1991–1993, and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author; and KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp & C. B. Summers. (2002, Fall). Education. In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Rockville, MD: Maryland Public Policy Institute; Montgomery County Public Schools. (2002, February 11). Joint teachers/principals letter questions MSPAP. Public announcement, Rockville, MD; and HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author.

[xiv] Doorey & Polikoff, p. 23.

[xv] MCAS bests PARCC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) as a grade 10 high school exit exam, that tests students in several subject fields (and not just ELA and math), and provides specific and timely instructional feedback.

[xvi] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute.

[xvii] It is perhaps the most enlightening paradox that, amid Common Core proponents’ profusion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xviii] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7.

[xix] McQuillan, Phelps, & Stotsky, p. 46.

[xxi] Linda Darling-Hammond, et al., pp. 16-18.

Posted in Common Core, Education policy, Education Reform, Mathematics, Reading & Writing, research ethics, Richard P. Phelps, Testing/Assessment

Trickle Down Academic Elitism

When [mid-20th century] I was in a private school in Northern California, I won a “gold” medal for first place in a track meet of the Private School Conference of Northern California for the high jump [5’6”]—which I thought was pretty high.

My “peers” in the Bay Area public high schools at the time were already clearing 6 feet, but I was, in fact, not in their league.

As the decades went by, high school students were clearing greater and greater heights, in the same way records were falling in all other sports.

The current high school record in the high jump (not the pole vault), set in July 2002 by Andra Manson of Kingston, Jamaica, at a high school in Brenham, Texas, is 7 feet, 7 inches.

How did this happen? Well, not by keeping progress in the high jump a secret.

A number of private schools in the Boston area have put an end to all academic prizes and honors, to keep those who don’t get them from feeling bad, but they still keep score in games, and they still report on and give prizes for elite athletic performances.

It seems obvious to me that because high school athletes knew the record for the high jump was moving up from five feet nothing to 7 feet, 7 inches, some large group of them decided to work at it and try to jump higher, with real success since 1950.

The Boston Globe has about 150 pages a year on high school sports, highlighting best performances in and results from all manner of athletic competitions. This must fuel ambition in other high school athletes to achieve more themselves, and even to merit mention in the media.

When it comes to high school academic achievement, on the other hand, The Boston Globe seems content to devote one page a year just to the valedictorians at the public high schools in Boston itself [usually half of them were born in another country, it seems].

Why is it that we are comfortable encouraging, supporting, seeking and celebrating elite performance in high school sports, but we seem shy, embarrassed, reluctant, ashamed, and even afraid to encourage, support, and acknowledge—much less celebrate—outstanding academic work by high school students?

Whatever the reasons, it seems likely that what we do will bring us more and better athletic efforts and achievements by high school students, while those students who really do want to achieve at the elite levels in their academic work can just keep all that to themselves, thank you very much. Seems pretty stupid to me, if we want, as we keep saying we want, higher academic achievement in our schools. Just a thought.

Posted in College prep, Education Fraud, Education policy, K-12, Testing/Assessment, Will Fitzhugh

Common Core’s Language Arts

It is often said that scientific writing is dull and boring to read. Writers choose words carefully, mean for them to be interpreted precisely, and so employ vocabulary that may be precise but is often obscure. Judgmental terms—particularly the many adjectives and adverbs that imply goodness and badness or better and worse—are avoided. Scientific text is expected to present a neutral communication background against which the evidence itself, and not the words used to describe the evidence, can be evaluated on its own merits.

As should be apparent to anyone exposed to Common Core, PARCC, and SBAC publications and presentations, most are neither dull nor boring, and they eschew precise, obscure words. But, neither are they neutral or objective. According to their advocates, Common Core, PARCC, and SBAC are “high-quality”, “deeper”, “richer”, “rigorous”, “challenging”, “stimulating”, “sophisticated”, and assess “higher-order” and “critical” thinking, “problem solving”, “deeper analysis”, “21st-Century skills”, and so on, ad infinitum.

By contrast, they describe the alternatives to Common Core and Common Core consortia assessments as “simple”, “superficial”, “low-quality”, and “dull” artifacts of a “19th-Century” “factory model of education” that relies on “drill and kill”, “plug and chug”, “rote memorization”, “rote recall”, and other “rotes”.

Our stuff good. Their stuff bad. No discussion needed.

This is not the stuff of science, but of advertising. Given the gargantuan resources Common Core, PARCC, and SBAC advocates have had at their disposal to saturate the media and lobby policymakers with their point of view, that opponents could muster any hearing at all is remarkable. [1]

The word “propaganda” may sound pejorative, but it fits the circumstance. Advocates bathe their product in pleasing, complimentary vocabulary, while splattering the alternatives with demeaning and unpleasant words. Only sources supportive of the preferred point of view are cited as evidence. Counter evidence is either declared non-existent and suppressed, or discredited and misrepresented. [2]

Their version of “high-quality” minimizes the importance of test reliability (i.e., consistency and comparability of results), an objective and precisely measurable trait, and maximizes the importance of test validity, an imprecise and highly subjective trait, as they choose to define it. [3] “High-quality”, in Common Core advocates’ view, comprises test formats and item types that match their progressive, constructivist view of education. [4] “High-quality” means more subjective, and less objective, testing. “High-quality” means tests built the way they like them.

“High quality” tests are also more expensive, take much longer to administer, and unfairly disadvantage already disadvantaged children, due to their lower likelihood of familiarity with complex test formats and computer-based assessment tools.

Read, for example, the report Criteria for high-quality assessment, written by Linda Darling-Hammond’s group at Stanford’s education school, people at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), housed at UCLA, and several other sympathizers. [5] These are groups with long histories of selective referencing and dismissive reviews. [6] The little research that supports their way of seeing things is highly praised. The far more voluminous research that contradicts their recommendations is ignored, demonized, ridiculed, or declared non-existent.

Unlike a typical scientific study write-up, Criteria for high-quality assessment brims with adjectival and adverbial praise for its favored assessment characteristics. Even its 14-page summary confronts the reader with “high-quality” 24 times; “higher” 18 times; “high-fidelity” seven times; “higher-level” four times; “deep”, “deeply”, or “deeper” 14 times; “critical” or “critically” 17 times; and “valuable” nine times. [7]

As Orwell said, control language and you control public policy. Common Core, PARCC, and SBAC proponents are guilty not only of biased promotion, selective referencing, and dismissive reviews but also of “floating” the definitions of terms.

For example, as R. James Milgram explains:

“The dictionary meaning of ‘rigorous’ in normal usage in mathematics is ‘the quality or state of being very exact, careful, or strict,’ but in educationese it is ‘assignments that encourage students to think critically, creatively, and more flexibly.’ Likewise, educationists may use the term rigorous to describe ‘learning environments that are not intended to be harsh, rigid, or overly prescriptive, but that are stimulating, engaging, and supportive.’ In short the two usages are almost diametrically opposite.” [8]

Such bunkum has sold us Common Core, PARCC, and SBAC. The progressive education/constructivist radical egalitarians currently running many U.S. education schools can achieve their aims simply by convincing super-naïve but well-endowed foundations and the U.S. Education Department (under both Republican and Democratic administrations) that they intend “higher”, “deeper”, “richer”, “more rigorous” education when, in fact, they target a dream of Rousseau-inspired discovery learning. They crave the open-inquiry, students-build-your-own-education of Summerhill School, even for the poor, downtrodden students who arrive at school with little to build from.

So many naïve, gullible, well-intentioned wealthy foundations dispensing money to improve US education. So many experienced, well-rehearsed, true believers ready to channel that money in the direction that serves their goals.


[1] For example, from the federal government alone, PARCC received $185,862,832 on August 13, 2013; SBAC received $175,849,539 to cover expenses to September 30, 2014. A complete accounting, of course, would include vast sums from the Bill & Melinda Gates Foundation, other foundations, the CCSSO, NGA, Achieve, and state governments.

[2] This behavior—of selective referencing and dismissive reviews (i.e., declaring that contrary research either does not exist or is for some other reason not worth considering)—is not new to the Common Core campaign. It has been the standard operating procedure among U.S. education research and policy elites for decades. But some of the most prominent and frequent users of these censorial techniques in the past are now high-profile salespersons for the Common Core, PARCC, and SBAC. See, for example, Richard P. Phelps. (2012, June). Dismissive reviews: Academe’s memory hole. Academic Questions, 25(2), pp. 228–241; Phelps, R. P. (2007, Summer). The dissolution of education knowledge. Educational Horizons, 85(4), 232–247; and Phelps, R. P. (2009). Worse than plagiarism? Firstness claims & dismissive reviews. Nonpartisan Education Review / Resources. Retrieved August 29, 2015 from

[3] Robert L. Ebel. (1961). Must all tests be valid? American Psychologist, 16, pp. 640–647.

[4] “Constructivism is basically a theory — based on observation and scientific study — about how people learn. It says that people construct their own understanding and knowledge of the world, through experiencing things and reflecting on those experiences.” Here are two descriptions of constructivism: one supportive, and one critical.

[5] Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, Claude M. Steele. (2013, June). Criteria for high-quality assessment. Stanford Center for Opportunity Policy in Education; Center for Research on Standards and Student Testing; & Learning Sciences Research Institute, University of Illinois at Chicago.

[6] See, for example, Richard P. Phelps. (2012). The rot festers: Another National Research Council report on testing. New Educational Foundations, 1; and Richard P. Phelps. (2015, July). The Gauntlet: Think tanks and federally funded centers misrepresent and suppress other education research. New Educational Foundations, 4.

[7] CCSSO. (2014). Criteria for procuring and evaluating high-quality assessments.

[8] Dr. Milgram’s observation is expressed in R.P. Phelps & R.J. Milgram. (2014, September). The revenge of K-12: How Common Core and the new SAT lower college standards in the U.S. Boston: Pioneer Institute, p. 41.

Posted in Common Core, Education policy, Education Reform, Ethics, K-12, research ethics, Richard P. Phelps, Testing/Assessment, Uncategorized

Wayne Bishop’s observations on the Aspen Ideas Festival session, “Is Math Important?”

Editors’ Note:

David Leonhardt is Washington Bureau Chief for the New York Times, won a Pulitzer Prize for his reporting on economic issues, and majored in applied mathematics as an undergraduate at Yale. Mr. Leonhardt chaired the panel, “Deep Dive: Is Math Important?” an “event” in the program track “The Beauty of Mathematics”. Other program track events included individual lectures from each of the panelists.

Mathematicians might consider the panel composition rather odd, and ideologically one-sided. Three panelists are not mathematicians, but are wholehearted believers in constructivist approaches to math education, often derided as “fuzzy math”. Two of them claim, ludicrously, that high-achieving East Asian countries teach math their way. The aforementioned panelists are: journalist Elizabeth Green, education professor Jo Boaler, and College Board’s David Coleman, with a degree in English lit and classical philosophy. When only one side is allowed to talk, of course, it can make any claims it likes.

Watch for yourself: Aspen Ideas Festival: Deep Dive: Is Math Important?

Professor Bishop’s essay, written in the form of a letter to David Leonhardt, can be found here.


Posted in Education Fraud, K-12, math, Mathematics, Wayne Bishop

David Coleman in Charge

Wayne Bishop recently made me aware of the unfortunate, completely one-sided discussion of US mathematics education at the recent Aspen Ideas Festival.

Watch for yourself.

Observe David Coleman from minute 25:40 on, starting while Elizabeth Green is talking.

Then listen, from minute 26:55 on, as he asserts a “kind of dirty little secret” that:

“the worst math problems of all are test prep problems… these are problems manufactured to prepare for an exam and they are typically done, utterly… if any good science or craft goes into making a really reliable assessment problem for an exam, none of that goes into test prep material. The test developers have hidden from… because they are trying to hide the exam from the test prep people, to try to keep it, right?”

Test developers, including the College Board, at least until Coleman arrived, have made available for free complete retired operational exams for test prep. These are not “manufactured to prepare for an exam”. They are the highest quality test items that have survived the lengthy gauntlet of editorial review, item review, bias review, more editing, field trials, more editing, comprehensive statistical analyses, more editing if needed, and still more statistical analysis.

Coleman has been at College Board for over three years, plenty of time to learn the trade. That, even now, he can say “if any good science or craft goes into making a really reliable assessment problem” should frighten us all.

Posted in College prep, Common Core, Education policy, K-12, math, Mathematics, Richard P. Phelps, Testing/Assessment

Jay Mathews: Part 1 of a 3-part review of Caleb Rossiter’s new book, “Ain’t Nobody Be Learnin’ Nothin’: The Fraud and the Fix for High-Poverty Schools”

Mayor, Council Members, State Board of Education Members,

This is assigned reading. It’s time to take off the rose-colored glasses and stop the routine affirmations of “I support education reform” without looking past the polished press releases. Please stop pretending that you don’t know that principals and teachers are under intense pressure to give unearned grades.

Caleb Rossiter is describing his experiences in a DCPS high school AND a DC public charter high school.

Erich Martel, Retired DCPS high school teacher (1969-2011: Cardozo HS, Wilson HS, Phelps ACE HS)

Teacher assails practice of giving passing grades to failing students

By Jay Mathews, Columnist, May 17 at 12:33 PM

Caleb Stewart Rossiter, a college professor and policy analyst, decided to try teaching math in the D.C. schools. He was given a pre-calculus class with 38 seniors at H.D. Woodson High School. When he discovered that half of them could not handle even second-grade problems, he sought out the teachers who had awarded the passing grades of D in Algebra II that the students needed to take his high-level class.

There are many bewildering stories like this in Rossiter’s new book, “Ain’t Nobody Be Learnin’ Nothin’: The Fraud and the Fix for High-Poverty Schools,” the best account of public education in the nation’s capital I have ever read. It will take me three columns to do justice to his revelations about what is being done to the District’s most distracted and least productive students.

Teachers will tell you it is a no-no to ask other teachers why they committed grading malpractice. Rossiter didn’t care. Three of the five teachers he sought had left the high-turnover D.C. system, but the two he found were so candid I still can’t get their words out of my mind.

The first, an African immigrant who taught special education for 20 years, was stunned to see one student’s name on Rossiter’s list. “Huh!” Rossiter quoted the teacher as saying. “That boy can’t add two plus two and doesn’t care! What’s he doing in pre-calculus? Yes of course I passed him — that’s a gentleman’s D. Everybody knows that a D for a special education student means nothing but that he came in once in a while.”

The second teacher had transferred from a private school in a Southern city so his wife could get her dream job in the Washington area. He explained that he gave a D to one disruptive girl on Rossiter’s list because, Rossiter said, “he didn’t want to have her in class ever again.” Her not-quite-failing grade was enough to get the all-important check mark for one of the four years of math required for graduation.

Rossiter moved to Tech Prep, a D.C. charter school, where he says he discovered the same aversion to giving F’s. The school told him to raise to D’s the first-quarter failing grades he had given to 30 percent of his ninth-grade algebra students. He quit instead.

Tech Prep officials indicated the F’s would have violated special-education rules. D.C. schools officials have not responded to my request for comment.

I share Rossiter’s view that such rule-bending is common in many D.C. schools overloaded with struggling students. The trend has been aggravated by computerized credit-recovery courses that take a few weeks and allow students to escape high school lives they loathe. Former D.C. history teacher Erich Martel has done much research on this. I have pointed out that the educators enabling such grade inflation might have the students’ best interests at heart. The students won’t stay in school, so giving them a diploma, no matter how fraudulent, might provide them with a chance to get some kind of job and, eventually, as they mature, sort themselves out.

It is very hard to maintain that Pollyanna-ish take on grade inflation after reading Rossiter’s book. He wrongly overlooks or discounts evidence of improvements in teaching and learning in many schools here and elsewhere, but his main point is unassailable. Lying to so many students, their families and other teachers is wrong and yet is rarely discussed in professional circles.

High school graduation rates, as reported by school districts with no independent checks, have been climbing. Public school officials said the D.C. graduation rate increased five percentage points in the past four years. The U.S. rate rose from 74 percent in 2007 to 81 percent in 2012, according to the Education Week Research Center.

I know of no research on how much of that increase can be attributed to fantasyland report cards. Rossiter says the strongest blow against fraud would be to reverse the national trend toward insisting that every high school student get a college-preparatory education before graduation.

I thought that trend was good. Most of those courses also help in the workplace. But Rossiter’s book is forcing me to reconsider.

Posted in College prep, Education Fraud, Education policy, Education Reform, Erich Martel, Ethics, K-12 | Comments Off on Jay Mathews: pt 1 of a 3 pt review of Caleb Rossiter’s new book, “Ain’t Nobody Be Learnin’ Nothin’: The Fraud and the Fix for High-Poverty Schools”

Starting school already behind

Underprivileged students start first grade already two grade levels behind more privileged students. The obvious solution to this discrepancy is to give the underprivileged kids more time, as in another year at the beginning of primary school. That would appear to some to be grade retention (which some do not like), and it would also cost more (which others don’t like), because the underprivileged students would be getting the extra year in school they need.

But politics intervenes, from all sides. Radical egalitarians, currently a dominant philosophical force in our schools of education, say grade retention is wrong and does not work (ignoring the fact that it works quite well in other countries where mastery is emphasized), and would be happy to deliberately hold the advanced students back until the underprivileged students catch up. Self-titled education reformers put all responsibility on teachers–“the single greatest school-based influence on education achievement” (as if students were not a school-based factor). Neither approach is realistic or fair.

There are some impediments to practical education solutions for which both the polar sides in education debates are responsible. But, because the press and policy-makers rarely talk to anyone in the no-man’s-land in between the opposing vested interest groups, they rarely consider the obvious and the practical.

Posted in Education policy, K-12, Richard P. Phelps | 3 Comments

Tom Oakland, 1939-2015

Tom Oakland

Thomas D. Oakland, 1939-2015

Tom Oakland epitomized the gentleman scholar. He was a world-renowned expert in educational assessment and evaluation–one of the best. He was also a tireless supporter of the Nonpartisan Education Review, from its beginning until his untimely end.

I last saw Tom at the International Testing Commission conference in San Sebastian, Spain. While many other movers and shakers maximized their time networking with like kind, Tom spent pretty much the entire time manning the ITC booth in the exhibit area. And, when traffic was slow at the booth, he circulated among the poster presenters. You know the poster presenters–the young, typically graduate students, trying to break into the profession.

Tom went from poster to poster, at each thanking the presenter for attending the conference and inquiring about their work and plans and making suggestions. Absolutely selfless. Always focused on the common good.

They don’t make them like Tom anymore.

Many–worldwide–will miss him.

Richard P. Phelps


Posted in Education policy, research ethics, Testing/Assessment, Tom Oakland, Uncategorized | Comments Off on Tom Oakland, 1939-2015

Robert T. Oliphant, 1924-2014

Bob Oliphant
Robert T. Oliphant

Bob Oliphant passed away in June, 2014. He was one of the most optimistic and generous people I’ve ever met, and one of my best friends. That despite the fact that we never met face-to-face—a typical 21st-Century relationship, you might say.

Born in 1924 in Tulsa, Oklahoma, Bob spent most of his childhood in Beaver, Pennsylvania, outside of Pittsburgh, before enlisting in the Army Air Corps and serving as a weather observer in England during World War II and in Germany during the occupation. After the war, he attended Pennsylvania’s Washington & Jefferson College with the intention of working as an accountant. Instead, after graduation, he played with several jazz big bands for a few years and then with a trio, the Crazy Cats, touring the Midwest for a decade. Eventually, he returned to higher education for a PhD in philology* at Stanford University with Herbert Merritt. He joined the faculty at California State University in Northridge in 1959, and never left. While publishing the expected complement of scholarly articles, on Chaucer, Shakespeare, the history of English, and philology, he also wrote four novels, Julia, Shangri-La, A Trumpet for Jackie, and the best known, A Piano for Mrs. Cimino, which became a film starring Bette Davis in one of her last roles. Mrs. Cimino was based on the true story of a woman who recovered from dementia through reality-orientation therapy.

Bob also wrote music and text for the performing arts, such as a monologue and four songs based on a Dostoevsky story entitled The Underground Man: A Sermon with Occasional Songs, and a musical score to accompany Wilde’s The Importance of Being Earnest. Bob played piano for several hours a day until shortly before he died, and wrote a poem daily to his wife, Jane.

Bob Oliphant was the most prolific contributor to the Nonpartisan Education Review, and a steadfast advocate of our mission. The pieces he wrote for the Review are listed and linked below. They range from several short essays to reference books hundreds of pages long.

More of Bob’s work can be found at other sites, such as Education Views, The Moral Liberal, and the Los Angeles Times. Amazon maintains two author pages for him, one for his scholarship and another for his fiction.

The Los Angeles Times, which published so many of his essays and letters to the editor, also published his obituary, which reads, in part:

October 25, 1924 – June 28, 2014 Robert T. Oliphant, a true Renaissance man, passed away peacefully at his home at the age of 89. He is survived by his wife, Jane, two sons Matthew (Cathy) and Jason (Eva), 6 grandchildren and 3 great-grandchildren. Robert proudly served in WWII, received his PhD from Stanford University and taught in the English department at CSUN for 36 years.

What would be Bob’s advice to the Nonpartisan Education Group, to which he contributed so much time and energy? Probably, as he so often said to me, “Keep Swingin’.” I was never sure if it was a baseball or a jazz reference. Maybe both.

We miss you, Bob.

Richard P. Phelps
*Philology: the branch of knowledge that deals with the structure, historical development, and relationships of a language or languages.


Robert T. Oliphant’s works in the Nonpartisan Education Review (in reverse chronological order)


Posted in Bob Oliphant, College prep, Education policy, K-12, Richard P. Phelps, Testing/Assessment, Uncategorized | Comments Off on Robert T. Oliphant, 1924-2014

Selling ‘Performance’ Assessments with Inaccurate Pictures from Kentucky

By Richard Innes, new in the Nonpartisan Education Review.


Posted in Common Core, Education policy, K-12, research ethics, Richard Innes, Testing/Assessment | Comments Off on Selling ‘Performance’ Assessments with Inaccurate Pictures from Kentucky

Beware of Test Scores Masquerading as Data

A semi-taboo, insufficiently discussed topic is the reliability of the test score data from statewide, nationwide, and international standardized tests; for example, our National Assessment of Educational Progress (NAEP), though the concern extends well beyond the NAEP scores. You can learn about the reliability issues from experts like Richard Phelps and Richard Innes.

I have frequently raised concerns about test score data generated by exams that have no consequences for the students who take them; that is, where a poor effort by a student does not adversely affect the student. The norm for most national, international, and some statewide standardized testing is that the students taking the tests have no incentive to give their top effort. NAEP, the so-called nation’s report card, is among the no-stakes-for-the-students tests. Expressing concern about this data reliability issue in an e-mail or a conversation nearly always yields no response, or a vague, dismissive one; the situation approaches ‘emperor has no clothes’ proportions.

The discovery that prompted this blog was Richard Phelps’ pronouncement that:

“Indeed, one drawback to the standardized student tests with no stakes for the students is that student effort does not just vary, it varies differently by demographic sub-group. The economists who like to use such scores for measuring school and teacher value-added just assume away these and other critical flaws.”

So, while such test scores might be broadly accurate – more substantive persuasion, please – they may be just numbers masquerading as data for some of the uses to which they have been put. And it is another reason to question the current system’s extensive reliance on top-down-only accountability to formal authority, which must rest on objective, apples-to-apples comparisons. We need robust, universal parental school choice to exploit subjective, bottom-up accountability to clients; that is, to employ a mix of top-down and bottom-up accountability to manage a system of diverse children and educators.

I’m willing to rely on NAEP and PISA test score data (etc.), with some reservations and reticence, because the data are consistent with the high stakes data and other indicators of school system effectiveness, and with established economic theory. But the no-stakes-for-the-students test score issue needs a lot more study and discussion.



Posted in Education policy, John Merrifield, K-12, Richard Innes, Richard P. Phelps, Testing/Assessment | Comments Off on Beware of Test Scores Masquerading as Data

No Child Left Behind Renewal: Blinders on the Education Policy Horse

Two weeks ago, The Honorable Lamar Alexander of Tennessee, the chair of the Senate Committee on Health, Education, Labor, and Pensions (HELP) invited three allegedly independent education researchers to discuss possible revisions to the Elementary and Secondary Education Act, currently known as No Child Left Behind. All other testimony came from practicing educators, politicians, or interest group representatives.

The three could have, and should have, broadly represented the research literature relevant to the issues at hand. Instead, they did the opposite. They promoted their own research and that of a tiny group of their friends and, when not ignoring the vast abundance of useful and relevant evidence available, openly declared it to be nonexistent.

For example, by far most independent education research, and most of the research pertinent to the issues the Congress is now considering, has been conducted by psychologists. Yet, research psychologists were nowhere represented in the HELP Committee hearings, and I do not just mean in person. I’ve combed the six pages of reference sources listed in the testimonies. Here’s a breakdown: about 175 of the authors listed are economists; a couple of dozen are political scientists; another couple of dozen are education professors; and at least one sociologist is represented. The number of research psychologists? Zero.

Moreover, most of the economists and political scientists are members of a particular tiny group of researchers that has assumed for itself the role of Republican Party education policy advisors and exclusive spokespersons for education reform. Though “tiny” may exaggerate the group’s size. Former students—and students of former students—of Harvard Political Science professor Paul Peterson comprise a substantial proportion of the group. Others come from just a few other universities, such as Stanford and the Universities of Washington (State), Michigan, and Chicago, and their associated think tanks and research centers, such as the Hoover and Brookings Institutions and the National Bureau of Economic Research (which is based in Cambridge, Massachusetts and more Harvard than national). A well-known venue managed by the group is Education Next, where members publish, lavishly praise each other’s work, and, just as often, dismiss the existence of competing work done by others. Call them, inbred and unread.

Congresspersons may have expected their invited guests to present a panorama of the research evidence to consider. They gave them a pinhole to peep through.

The researchers recommended—and claimed that all the “best research” supported the recommendation—that the federal government continue to require annual testing in reading and mathematics, largely because it gives them great data sets to analyze, but also because annual testing is necessary to continue value-added measurement of teachers.

What of the main alternative—“grade span” testing—testing students at the end of the main levels of education, such as the end of the primary grades, the end of middle school, and the end of high school? Not even mentioned.

Completely ignored then were obvious points in favor of grade-span testing, such as: every one of the dozens of countries beating us on international tests requires grade-span testing and not annual testing; grade-span testing typically includes a full battery of core subjects, whereas our annual testing includes only reading and math; and it is far easier to apply stakes to student test performance with grade-span tests than it would be with annual testing, and students learn far more when the tests they take “count” for them.

Indeed, one drawback to annual student tests, with no stakes for the students, for measuring teacher performance is that student effort does not just vary, it varies differently by demographic subgroup. The economists who like value-added measurement (VAM) just assume away these and other critical flaws.

The researchers testifying before the HELP Committee offered as their primary piece of evidence supporting the use of VAM two economists’ study of the District of Columbia Public Schools’ (DCPS) program. They asserted that teachers who received low VAM scores tended to leave DCPS and, on average, were then replaced by teachers who earned higher VAM scores. So, teacher quality improved, you see.

Intelligent people conducted this study, so one would assume that they recognized both the tautology and the regression toward the mean. That they proffer the study as valid evidence anyway seems cynical. Instead of ranking teachers by VAM scores, DCPS could have ranked them by height, weight, age, or their favorite colors. If height were the criterion for evaluating teacher performance, one would expect shorter teachers to leave DCPS and, if replaced at random, on average their replacements would be taller. That doesn’t mean that students learn more from taller teachers.
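The height thought experiment can be checked with a few lines of simulation. This sketch (the numbers are illustrative, not drawn from the DCPS study) ranks a pool of teachers by a metric unrelated to actual quality, lets the lowest-ranked leave, and replaces them with random new hires:

```python
import random

random.seed(42)

# Rank teachers by an irrelevant, noisy metric (think "height"),
# let the bottom-ranked leave, replace them with random new hires.
N_TEACHERS = 1000
N_LEAVING = 100  # the bottom 10% "leave the district"

scores = sorted(random.gauss(0, 1) for _ in range(N_TEACHERS))

leavers = scores[:N_LEAVING]  # lowest-scoring teachers depart
replacements = [random.gauss(0, 1) for _ in range(N_LEAVING)]  # random hires

avg_leavers = sum(leavers) / N_LEAVING
avg_replacements = sum(replacements) / N_LEAVING

print(f"average score of departing teachers: {avg_leavers:.2f}")
print(f"average score of replacement hires:  {avg_replacements:.2f}")
# The replacements score higher by construction: random draws beat
# the selected bottom tail of any distribution, whatever was measured.
```

Because the departing group is selected from the bottom tail, random replacements outscore them on the ranking metric every time, regardless of whether the metric has anything to do with how much students learn.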

Republican lawmakers typically vouch for the value of competitive markets and eschew the harms of monopolies. Unregulated monopolies, most Republican Congresspersons can probably tell you, will inevitably: lower quality; raise prices; reduce output, efficiency, and innovation; and raise barriers to entry from potential competitors. True to form, the Republicans’ education policy advisors exhibit all the classic monopoly behaviors, not only poorly serving the American public, but poorly serving the Republicans, too. When will Republican politicians finally recognize that they are being used?

Researchers who tell you “listen only to us and do not talk to anyone else” are typically ruthlessly ambitious, but they are not shy. Have we witnessed this ethically challenged scholarly behavior before in education policy-making discussions? Why, yes, in 2001 when the No Child Left Behind Act was first written.

Posted in Education policy, K-12, research ethics, Richard P. Phelps, Testing/Assessment | 1 Comment

Using middle schoolers for anti-testing advocacy?

Superintendent Mark D. LaRoach
Vestal School District, New York

Dear Superintendent LaRoach:

I conduct research on the effects of standardized testing on student achievement. I have read more than three thousand studies dating back a century and spanning more than thirty countries. The results have been rather astounding–on average, a very strong positive effect. These results have been corroborated by hundreds of recent studies by cognitive psychologists.

Given the rabid hatred of standardized testing among many inside US public education, however, I have gotten used to routine demonizing of me and my work from education professors and advocates, …but from middle schoolers?

Would you please verify for me that the messages below indeed came from Vestal middle schoolers? I would also be interested in your perspective on this use of both public infrastructure–the email messages were sent from your server–and middle schoolers themselves for political advocacy.

Best Wishes, Richard P. Phelps


Jan 23 at 9:03 AM

Dear Mr. Phelps:

Imagine this: you’re sitting in your homeroom, anxiously waiting to get the test over with. How do you feel? Most students feel sick and tired, which just makes it more nerve-wracking than it already is. You don’t want students to feel so nervous that they vomit and have to take the test in it, do you? Would you? Most students don’t even want to go to school everyday, so why make them dread it more? Even the kids who are sick those days have to make it up, unfair. Teachers don’t like them either; they just sit staring at the room, watching kids suffer through these terrible standardized tests. A lot of people would agree with me that you should stop standardized testing.

On the Program for International Student Achievement, the United States slipped from 18th to 31st place in 2009. We want the US to be educated even without the tests. Did you know that 50-80% of year-over-year test score improvements were temporary and caused by fluctuations that had nothing to do with long term changes in learning. They should be permanent! Also, 44% of school districts had reduced the time spent on science, social studies and the arts by an average of 145 minutes per week in order to focus on reading and math. Other subjects are important too. Do you know how these tests make students feel? Standardized testing causes severe stress in younger students. That is very unhealthy for them. Some excessive testing may teach children to be good at taking tests, but they don’t prepare them for productive adult lives. We should prepare them to be productive adults.

The schools that are feeling the pressure of NCLB (No Child Left Behind)’s proficiency requirement are “gaming the system” to raise test scores, also known as cheating. It is unfair to students for the schools to cheat because not all of them do. People say things that people believe in to get on their side. Gerald W. Bracey says that, “qualities that standardized tests cannot measure are creativity, critical thinking, honesty, and so on”. Some students want that to be measured. Gregory J. Cizek says, “Anecdotes abound illustrating how testing…produces gripping anxiety in even the brightest students, and makes young children vomit or cry, or both”. That is pure torture to students. The low-performing students are encouraged to stay home. This isn’t fair to those high-performing students to take the test.

They say that most students believe that standardized tests are fair. Honestly, not one student or teacher I know thinks that the standardized tests are fair. This is because you have to sit still for over an hour taking a test that is really boring for most students. Therefore, I believe that standardized tests are not fair.

Now, you’ve read the whole email, what do you think about standardized tests? You should be thinking that you should really eliminate them. Some students vomit and have to take a vomit-covered test, gross. Please make these silly tests come to an end.

Thank you for your time,
Emma MacDonald

Emma MacDonald
Vestal Middle School
600 South Benita Blvd.
Vestal, NY 13850


Jan 26 at 9:03 AM

Dear Superintendent Phelps,

“U.S. students slipped from 18th in the world in 2000 to 31st place in 2009, with a similar decline in science and no change in reading.” Schools have spent way to much time not focusing all of the important subjects in schools and focusing on just one or two subjects, from standardized tests instead of focusing on studies and curriculum. Standardized tests hmmm, standardized tests what do I think of them… Some people think they are scary or nervous they get so extreme with nerve racking tests and “they’re life is depending on it” tests it gets out of hand. Sacrameto Bee reported that ‘test related jitters especially young students, are so common that the Stanford 9 exam comes with instructions on what to do with a test booklet in case a student vomits on it.

Standardized tests have no point to them it is a joke for having kids take them it is only help for teachers to judge the student and make them look. Even teachers are being pressured though because if the children d bad on the tests it is crucial for them. Schools feeling the pressure of the NCLB’s 100% proficiency requirement are “gaming the system” to raise test scores.

They say that standardized tests have a “positive effect” on student achievement. But actually you are telling lies. Students believe they are not productive and improvements from these tests are rare. Because based on a “study published by the Brookings institution found that 50-80% of year-over-year test score improvements were temporary. Therefore standardized tests should not be published and children should not be able to take them.

School testing to kids are things that they despise and are always worrying about failing them and not passing, or getting put in workshop classes and not living up to their parents expectations don’t put pressure on kids stop standardized tests. Just stop for a minute and think what you are doing to kids all across the world.

Thank you for your time,

Emilia Cappellett

Vestal Middle School
600 South Benita Blvd.
Vestal, NY 13850

Posted in Education policy, K-12, Richard P. Phelps, Testing/Assessment, Uncategorized | Comments Off on Using middle schoolers for anti-testing advocacy?

Overtesting or Overcounting?

Commenting on the Center for American Progress’s (CAP’s) report, Testing Overload in America’s Schools, and the Education Writers’ Association’s coverage of it:

Some testing opponents have always said there is overtesting, no matter how much testing there has actually been (just as they have always said there is a “growing backlash” against testing). Given limited time, I will examine only one of the claims made in the CAP report:

“… in the Jefferson County school district in Kentucky, which includes Louisville, students in grades 6-8 were tested approximately 20 times throughout the year. Sixteen of these tests were district level assessments.” (p.19)

A check of the Jefferson County school district web site reveals the following: there are no district-developed standardized tests – NONE. All systemwide tests are either state-developed or national exams.

Moreover, regular students in grades 6 and 7 take only one test per year – ONE – the K-Prep, though it is a full-battery test (i.e., five core subjects) with only one subject tested per day. (No, each subtest does not take up a whole day; more likely each subtest takes 1-1.5 hours, but slower students are given all morning to finish while the other students study something else in a different room and the afternoon is used for instruction.) So, even if you (I would say, misleadingly) count each subtest as a whole test, the students in grades 6 and 7 take only 5 tests during the year, none of them district tests.

So, is the Center for American Progress lying to us? Depends on how you define it. There is other standardized testing in grades 6 and 7. There is, for example, the “Alternate K-Prep” for those with disabilities, but students without disabilities don’t take it and students with disabilities don’t take the regular K-Prep.

Also there is the “Make-up K-Prep” which is administered to the regular students who were sick during the regular K-Prep administration times. But, students who took the K-Prep during the regular administration do not take the Make-up K-Prep.

There are also the ACCESS for ELLs and Alternate ACCESS for ELLs tests administered in late January and February, but only to English Language Learners. ACCESS is used to help guide the language training and course placement of ELL (or ESL) students. Only a Scrooge would count the district’s use of these tests as “overtesting.”

And, that’s it. To get to 20 tests a year, the CAP had to assume that each and every student took each and every subtest. They even had to assume that the students sick during the regular K-Prep administration were not sick, and that all students who took the regular K-Prep also took the Make-up K-Prep.

Counting tests in US education has been this way for at least a quarter-century. Those prone to do so goose the numbers any way they plausibly can. A test is given in grade 5 on Tuesday? Count all students in the school as being tested. A DIBELS test takes all of one minute to administer? Count a full class period as lost. A 3-hour ACT has five sub-sections? That counts as five tests. Only a small percentage of schools in the district are sampled to take the National Assessment of Educational Progress in one or two grades? Count all students in all grades in the district as being tested, and count all the subjects tested individually.

Critics have gotten away with this fibbing for so long it has become routine–the standard way to count the amount of testing. And, reporters tend to pass it along as fact.

Richard P. Phelps

Posted in College prep, Education policy, K-12, Richard P. Phelps, Testing/Assessment | Comments Off on Overtesting or Overcounting?

Kamenetz, A. (2015). The Test: Why our schools are obsessed with standardized testing—but you don’t have to be. New York: Public Affairs. Book Review, by Richard P. Phelps

Perhaps it is because I avoid most tabloid journalism that I found journalist Anya Kamenetz’s loose cannon Introduction to The Test: Why our schools are obsessed with standardized testing—but you don’t have to be so jarring. In the space of seven pages, she employs the pejoratives “test obsession”, “test score obsession”, “testing obsession”, “insidious … test creep”, “testing mania”, “endless measurement”, “testing arms race”, “high-stakes madness”, “obsession with metrics”, and “test-obsessed culture”.

Those un-measured words fit tightly alongside assertions that education, or standardized, or high-stakes testing is responsible for numerous harms ranging from stomachaches, stunted spirits, family stress, “undermined” schools, demoralized teachers, and paralyzed public debate, to the Great Recession (pp. 1, 6, 7), which was initially sparked by problems with mortgage-backed financial securities; the connection, apparently, is that parents choose home locations in part based on schools’ average test scores. Oh, and tests are “gutting our country’s future competitiveness,” too (p. 1).

Kamenetz made almost no effort to search for counter-evidence[1]: “there’s lots of evidence that these tests are doing harm, and very little in their favor” (p. 13). Among her several sources of information on the relevant research literature are arguably the country’s most prolific proponents of the notion that little to no research exists showing educational benefits to testing.[2] Ergo, why bother to look for it?

Had a journalist covered the legendary feud between the Hatfield and McCoy families, and talked only to the Hatfields, one might expect a surplus of reportage favoring the Hatfields and disfavoring the McCoys, and a deficit of reportage favoring the McCoys and disfavoring the Hatfields.

Looking at tests from any angle, Kamenetz sees only evil. Tests are bad because tests were used to enforce Jim Crow discrimination (p. 63). Tests are bad because some of the first scientists to use intelligence tests were racists (pp. 40-43).

Tests are bad because they employ the statistical tools of latent trait theory and factor analysis—as tens of thousands of social scientists worldwide currently do—but the “eminent paleontologist” Stephen Jay Gould didn’t like them (pp. 46-48). (Gould argued that if you cannot measure something directly, it doesn’t really exist.) And, by the way, did you know that some of the early 20th-century scientists of intelligence testing were racists? (pp. 48-57)

Tests are bad because of Campbell’s Law: “when a measure becomes a target, it ceases to be a good measure” (p. 5). Such a criticism, if valid, could be used to condemn any measure used evaluatively in any of society’s realms. Forget health and medical studies, sports statistics, Department of Agriculture food monitoring protocols, and ratings by Consumer Reports, Angie’s List, or the Food and Drug Administration. None are “good measures” because they are all targets.

Tests are bad because they are “controlled by a handful of companies” (pp. 5, 81), because “The testing company determines the quality of teachers’ performance” (p. 20), and because “tests shift control and authority into the hands of the unregulated testing industry” (p. 75). Such criticisms, if valid, could be used to justify nationalizing all businesses in industries with high scale economies (e.g., there are only four big national wireless telephone companies, so perhaps the federal government should take over) and outlawing all government contracting. Most of our country’s roads and bridges, for example, are built by private construction firms under contract to local, state, and national government agencies, to those agencies’ specifications, just like most standardized tests; but who believes that those firms control our roads?

Kamenetz swallows education anti-testing dogma whole. She claims that multiple-choice items can only test recall and basic skills (p. 35), that students learn nothing while they are taking tests (p. 15), and that US students are tested more than any others (pp. 15-17, 75). And they are, if you count the way her information sources do: counting at minimum an entire class period for each test administration, even a one-minute DIBELS test; counting all students in all grades of a school as tested whenever any students in any grade are taking a test; counting all subtests independently in the US (e.g., each ACT counts as five tests because it has five subtests) but only whole tests for other countries; and so on.

Standardized testing absorbs way too much money and time, according to Kamenetz. Later in the book, however, she recommends an alternative education universe of fuzzy assessments that, if enacted, would absorb far more time and money.

What are her solutions to the insidious obsessive mania of testing? There is some Rousseauian fantasizing—all schools should be like her daughter’s happy pre-school, where each student learned at his or her own pace (pp. 3-4) and the school’s job was “customizing learning to each student” (p. 8).

Some of the book’s latter half is devoted to “innovative” (of course) solutions that are not quite as innovative as she seems to believe. She is National Public Radio’s “lead digital education reporter” so some interesting new and recent technologies suffuse the recommendations. But, even jazzing up the context, format, and delivery mechanisms with the latest whiz-bang gizmos will not eliminate the problems inherent in her old-new solutions: performance testing, simulations, demonstrations, portfolios, and the like. Like so many Common Core Standards boosters advocating the same “innovations”, she seems unaware that they have been tried in the past, with disastrous results.[3]

As I do not know Ms. Kamenetz personally, I must assume that she is sincere in her beliefs and made her own decisions about what to write. But, if she had naively allowed herself to be wholly misled by those with a vested interest in education establishment doctrine, the end result would have been no different.

The book is a lazily slapped-together rant, unworthy of a journalist. Ironically, however, I agree with Kamenetz on many issues. Like her, I do not much like the assessment components of the old No Child Left Behind Act or the new Common Core Standards. But, my solution would be to repeal both programs, not eliminate standardized testing. Like her, I oppose the US practice of relying on a single proficiency standard for all students (pp. 5, 36). But, my solution would be to employ multiple targets, as most other countries do. She would dump the tests.

Like Kamenetz, I believe it unproductive to devote more than a smidgen of time (at most half a day) to test preparation with test forms and item formats that are separate from subject matter learning. And, like her (p. 194), I am convinced that it does more harm than good. But, she blames the tests and the testing companies for the abomination; in fact, the testing companies prominently and frequently discourage the practice. It is the same testing opponents she has chosen to trust who claim that it works. It serves their argument to claim that non-subject-matter-related test preparation works because, if it were true, it would demonstrate that tests can be gamed with tricks and are invalid measurement instruments.

Like her, I oppose firing teachers based on student test scores, as current value-added measurement (VAM) systems do, while the students themselves face no consequences. I believe it wrong because too few data points are used and because student effort in such conditions is not reliable, varying by age, gender, socio-economic level, and more. But, I would eliminate the VAM program, or drastically revise it; she would eliminate the tests.

Like Kamenetz, I believe that educators’ cheating on tests is unacceptable, far more common than is publicly known, and should be stopped. I say, stop the cheating. She says, dump the tests.

It defies common sense to have teachers administering high-stakes tests in their own classrooms. Rotating test administration assignments so that teachers do not proctor their own students is easy. Rotating assignments further so that every testing room is proctored by at least two adults is easy, too. So, why aren’t these and other astonishingly easy fixes to test security problems implemented? Note that the education professionals responsible for managing test administrations are often the same people who complain that testing is impossibly unfair.

The sensible solution is to take test administration management out of the hands of those who may welcome test administration fiascos, and hire independent professionals with no conflict of interest. But, like many education insiders, Kamenetz would ban the testing, thereby rewarding those who have mismanaged test administrations, sometimes deliberately, with a vacation from reliable external evaluation.

If she were correct on all these issues—that the testing is the problem in each case—shouldn’t we also eliminate examinations for doctors, lawyers, nurses, and pharmacists (all of which rely overwhelmingly on the multiple-choice format, by the way)?

Our country has a problem. More than in most other countries, our public education system is independent, self-contained, and self-renewing. Education professionals staffing school districts make the hiring, purchasing, and school catchment-area boundary-line decisions. School district boundaries often differ from those of other governmental jurisdictions, confusing the electorate. In many jurisdictions, school officials set the dates for votes on bond issues or school board elections, and can do so to their advantage. Those school officials are trained, and socialized, in graduate schools of education.

A half century ago, most faculty in graduate schools of education may have received their own professional training in core disciplines, such as Psychology, Sociology, or Business Management. Today, most education school faculty are themselves education school graduates, socialized in the prevailing culture. The dominant expertise in schools of education can maintain its dominance by hiring faculty who agree with it and denying tenure to those who stray. The dominant expertise in education journals can control education knowledge by accepting article submissions with agreeable results and rejecting those without.

Even most testing and measurement PhD training programs now reside in education schools, inside the same cultural cocoon.

Standardized testing is one of the few remaining independent tools US society has for holding education professionals accountable to serve the public, and not their own, interests. Without valid, reliable, objective external measurement, education professionals can do what they please inside our schools, with our children and our money. When educators are the only arbiters of the quality of their own work, they tend to rate it consistently well.

A substantial portion of The Test’s girth is filled with complaints that tests do not measure most of what students are supposed to or should learn: “It’s math and reading skills, history and science facts that kids are tested and graded on. Emotional, social, moral, spiritual, creative, and physical development all become marginal…” (p. 4). She quotes Daniel Koretz: “These tests can measure only a subset of the goals of education” (p. 14). Several other testing critics are cited making similar claims.

Yet, standards-based tests are developed in a process that takes years, and involves scores of legislators, parents, teachers, and administrators on a variety of decision-making committees. The citizens of a jurisdiction and their representatives choose the content of standards-based tests. They could choose content that Kamenetz and the several other critics she cites prefer, but they don’t.

If the critics are unhappy with test content, they should take their case to the proper authorities, voice their complaints at tedious standards commission hearings, and contribute their time to the rather monotonous work of test framework review committees. I sense that none of that patient effort interests them; instead, they would prefer that all decision-making power be granted to them, ex cathedra, to do as they think best for us.

Moreover, I find some of their assertions about what should be studied and tested rather scary. Our public schools should teach our children emotions, morals, and spirituality?

Likely that prospect would scare most parents, too. But, many parents’ first reaction to a proposal that our schools be allowed to teach their children everything might instead be something like: first show us that you can teach our children to read, write, and compute, then we can discuss further responsibilities.

So long as education insiders insist that we must hand over our money and children and leave them alone to determine—and evaluate—what they do with both, calls for “imploding” the public education system will only grow louder, as they should.

It is bad enough that so many education professors write propaganda, call it research, and deliberately mislead journalists by declaring an absence of countervailing research and researchers. Researchers confident in their arguments and evidence should be unafraid to face opponents and opposing ideas. The researchers Kamenetz trusts do all they can to deny dissenters a hearing.

Another potential independent tool for holding education professionals accountable, in addition to testing, could be an active, skeptical, and inquiring press knowledgeable of education issues and conflicts of interests. Other countries have it. Why are so many US education reporters gullible sycophants?



[1] She did speak with Samuel Casey Carter, the author of No Excuses: Lessons from 21 High-Performing, High-Poverty Schools (2000) (pp. 81-84), but chides him for recommending frequent testing without “framing” it within “the racist origins of standardized testing.” Kamenetz suggests that test scores are almost completely determined by household wealth and dismisses Carter’s explanations as a “mishmash of anecdotal evidence and conservative faith.”

[2] Those sources are Daniel Koretz, Brian Jacob, and the “FairTest” crew. In fact, an enormous research literature revealing large benefits from standardized, high-stakes, and frequent education testing spans a century (Brown, Roediger, & McDaniel, 2014; Larsen & Butler, 2013; Phelps, 2012).

[3] The 1990s witnessed the chaos of the New Standards Project, MSPAP (Maryland), CLAS (California) and KIRIS (Kentucky), dysfunctional programs that, when implemented, were overwhelmingly rejected by citizens, politicians and measurement professionals alike. (Incidentally, some of the same masterminds behind those projects have resurfaced as lead writers for the Common Core Standards.)



Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Belknap Press.

Larsen, D. P., & Butler, A. C. (2013). Test-enhanced learning. In K. Walsh (Ed.), Oxford textbook of medical education (pp. 443–452). Oxford: Oxford University Press.

Phelps, R. P. (2012). The effect of testing on student achievement, 1910–2010. International Journal of Testing, 12(1), 21–43.


Posted in College prep, Education policy, K-12, Richard P. Phelps, Testing/Assessment | Comments Off on Kamenetz, A. (2015). The Test: Why our schools are obsessed with standardized testing—but you don’t have to be. New York: Public Affairs. Book Review, by Richard P. Phelps

Richard Innes’ Georgia testimony on Common Core

Testimony to Georgia House’s Federal Government’s Role in Education Study Committee Regarding: Common Core State Standards and Related Testing Issues

New in the Nonpartisan Education Review: “Testimony to Georgia House’s Federal Government’s Role in Education Study Committee Regarding: Common Core State Standards and Related Testing Issues”, by Richard Innes.

Posted in College prep, Education policy, K-12, Richard Innes, Testing/Assessment | Comments Off on Richard Innes’ Georgia testimony on Common Core

Common Sense Approach to Common Core Math

I’m writing a series of articles for Heartlander on a Common Sense Approach to Common Core. In these articles, I show that there are interpretations of Common Core that aren’t as ridiculous as the ones we’re seeing on YouTube and other venues.
Barry Garelick
Posted in Barry Garelick, K-12, Mathematics | Comments Off on Common Sense Approach to Common Core Math

Undoing the “Rote Understanding” Approach to the Common Core Math Standards

What has been used as a help in older textbooks and in Singapore is turning out to be a hindrance in the U.S. under the current interpretations of Common Core. Insisting on calculations based on “making tens” and other such approaches is, in my opinion, not likely to prove useful for all first graders. Teachers should be free to differentiate instruction so that those students who are able to use these strategies can achieve those goals. It is unrealistic and potentially destructive to interpret the Common Core math standards as requiring that all first grade students use these strategies in the name of “understanding”. That should be the real objection voiced to demonstrations of this method under Common Core—not the method itself.

Read all about it here.

Posted in Barry Garelick, Education policy, K-12, Mathematics | Comments Off on Undoing the “Rote Understanding” Approach to the Common Core Math Standards

Press Release: Study Finds Common Core Math Standards Will Reduce Enrollment in High-Level High School Math Courses, Dumb Down College STEM Curriculum

Lower standards, alignment of SAT to Common Core likely to hurt low-income students the most

BOSTON – Common Core math standards (CCMS) end after just a partial Algebra II course. This weak Algebra II course will result in fewer high school students able to study higher-level math and science courses and an increase in credit-bearing college courses that are at the level of seventh and eighth grade material in high-achieving countries, according to a new study published by Pioneer Institute.

The framers of Common Core claimed the standards would be anchored to higher education requirements, then back-mapped through upper and lower grades. But Richard P. Phelps and R. James Milgram, authors of “The Revenge of K-12: How Common Core and the New SAT Lower College Standards in the U.S.,” find that higher education was scarcely involved with creating the standards.

“The only higher education involvement was from institutions that agreed to place any students who pass Common Core-based tests in high school into credit-bearing college courses,” said Phelps. “The guarantee came in return for states’ hoped-for receipt of federal ‘Race to the Top’ grant funding.”

“Many students will fail those courses – until they’re watered down,” he added.

Perhaps the greatest harm to higher education will come from the College Board’s decision to align its SAT tests with Common Core. The SAT has historically been an aptitude test – one designed to predict college success. But the new test would become an achievement test – a retrospective assessment designed to measure mastery of high school material. Many high-achieving countries administer a retrospective test for high school graduation and a predictive college entrance examination.

The new test will also be less useful to college admissions officers, since information gained from the retrospective test will duplicate data they already have, such as grade point average and class rank. David Coleman, the lead author of Common Core’s English language arts standards, is now president of the College Board and announced the decision to align the SAT tests with Common Core when he became president.

The change in the nature of the SAT will be most harmful to low-income students. An achievement test is far less useful as a vehicle for identifying students with high science, technology, engineering, and math (STEM) potential who attended high schools with poor math and science instruction.

Retrospective tests are also more susceptible to coaching, which provides another advantage to students from families who can afford test preparation courses.

Low-income students will also be hurt the most by the shift to weaker math standards. Since the Common Core math standards only end at a partial Algebra II course, nothing higher than Algebra II will be tested by federally funded assessments that are currently under development. High schools in low-income areas will be under the greatest fiscal pressure to eliminate under-subscribed electives like trigonometry, pre-calculus, and calculus.

Research has shown that the highest-level math course taken in high school is the single best predictor of college success. Only 39 percent of the members of the class of 1992 who entered college having taken nothing beyond Algebra II earned a college degree. The authors estimate that the number will shrink to 31-33 percent for the class of 2012.

Two of the authors of the Common Core math standards, Jason Zimba and William McCallum, have publicly acknowledged the standards’ weakness. At a public meeting in Massachusetts in 2010, Zimba said the CCMS is “not for STEM” and “not for selective colleges.”

Indeed, among students intending to major in STEM fields, just 2 percent of those whose first college math course is pre-calculus or lower ever graduate with a STEM degree. Proponents claim the Common Core standards are internationally benchmarked, but compulsory standards for the lower secondary grades in China are more advanced than any CCMS material.

The highest-achieving countries have standards for different pathways based on curricular preferences, goals, and levels of achievement, and each pathway has its own exit examination. “A one-size-fits-all academic achievement target must of necessity be low,” Milgram said. “Otherwise politically unacceptable numbers of students will fail.”

Richard P. Phelps is the editor or author of four books: Correcting Fallacies about Educational and Psychological Testing (APA, 2008/2009); Standardized Testing Primer (Peter Lang, 2007); Defending Standardized Testing (Psychology Press, 2005); and Kill the Messenger (Transaction, 2003, 2005). He is the founder of the Nonpartisan Education Review.

R. James Milgram is professor of mathematics emeritus at Stanford University. He was a member of Common Core’s Validation Committee, 2009-2010. Aside from writing and editing a large number of graduate-level books on research-level mathematics, he has served on the NASA Advisory Board (the only mathematician ever to have served on that board) and has held a number of the most prestigious professorships in the world, including the Gauss Professorship in Germany.

Posted in College prep, Education policy, K-12, Mathematics | Comments Off on Press Release: Study Finds Common Core Math Standards Will Reduce Enrollment in High-Level High School Math Courses, Dumb Down College STEM Curriculum

Wayne Bishop’s Response to Ratner and Wu (Wall Street Journal)

Making Math Education Even Worse, by Marina Ratner

Dear Hung-Hsi,

It pains me to write this, but in spite of all of your precollegiate mathematics education knowledge and contributions, Prof. Ratner got it right and you “missed the boat” in your response:
The CA Math Content Standards were – and still are – the best in the country. They have problems; e.g., there is too much specialized focus in the thread on Statistics, Data Analysis, and Probability and, even worse, in the thread on Mathematical Reasoning. No sensible person can be against mathematical reasoning, of course, but that is exactly the point. Sensible people embed it everywhere and, as a standalone item, it becomes almost meaningless – hence the paucity (as in none) of CA Key Standards in that category. The writers included it to help ensure Board of Ed approval, because most professional math educators were strongly objecting to the entire Stanford approach. Perhaps most egregious is your characterization of California’s problems using poison words: “rote-learning of linear equations by not preparing students for the correct definition of slope.” This is at best misleading and closer to being flat wrong:
From the introduction to Grade 7:
“They graph linear functions and understand the idea of slope and its relation to ratio.”
This is followed specifically with two Key Standards and examples:
3.3 Graph linear functions, noting that the vertical change (change in y-value) per unit of horizontal change (change in x-value) is always the same and know that the ratio (“rise over run”) is called the slope of a graph.
3.4 Plot the values of quantities whose ratios are always the same (e.g., cost to the number of an item, feet to inches, circumference to diameter of a circle). Fit a line to the plot and understand that the slope of the line equals the ratio of the quantities.
In what way(s) do you find the relevant 8th grade standard in the CCSS-M, Expressions and Equations (EE.8 #5,6), to be conceptually superior? (The word is used once in the intro to Grade 7 but is not mentioned thereafter.) Formally proving that all pairs of distinct points determine similar triangles, so that this ratio is well-defined, would be mathematically necessary to be completely logical, but I doubt that’s what you meant, particularly since traditional proof has been downplayed so badly even in the high school CCSS-M, much less 8th grade, especially in comparison with the CA Math Content Standards.
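The similar-triangles argument referred to here can be sketched in a few lines (the notation is illustrative, not drawn from either set of standards):

```latex
% For two distinct points (x_1, y_1) and (x_2, y_2) on a non-vertical
% line, define the "rise over run" ratio
\[
  m = \frac{y_2 - y_1}{x_2 - x_1}.
\]
% For any other pair of distinct points (x_3, y_3) and (x_4, y_4) on the
% same line, the right triangles with hypotenuses on the line and legs
% parallel to the axes have equal corresponding angles, hence are
% similar, so corresponding legs are proportional:
\[
  \frac{y_2 - y_1}{x_2 - x_1} \;=\; \frac{y_4 - y_3}{x_4 - x_3}.
\]
% The ratio is therefore independent of which points are chosen: slope
% is a property of the line itself, not of a particular pair of points.
```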

Regarding the general concept of a competent Algebra 1 course (not some pretense thereof): it was, it is, and it will remain standard in 8th grade (if not already accomplished in 7th grade) for self-respecting, academically-oriented private schools. As you well know, the Stanford Math group who wrote the CA Standards started with the egalitarian notion that this should be an opportunity for everyone, including those who do not have access to such schools. Traditional Algebra 1 could not be, and was not intended to be, simply imposed as the math course for all 8th graders; rather, the group worked backwards from that target, step-by-step through the grades, in order to get there comfortably (such as by developing the concept of slope in 7th grade, which you appear to have missed). Is every detail spelled out? Of course not, nor should it be, but the key ideas – even set off as Key Standards – are there, and presented considerably more clearly than in the CCSS-M.

There is statistical evidence that the goal did improve the state of mathematics competence in California, but we both know the CA Math Content Standards fell well short of the ideal. It was not – as your words could be interpreted to imply – that they reflect an inherent lack of development of student understanding. The primary villain is the overwhelming mandate for chronological grade placement (age 5) for incoming students and almost universal social promotion. Far too many students are not competent with the standards at their grade levels – sometimes they are years below – yet they move on anyway. Algebra in 8th grade – or in 11th grade, or even in college – is not realistic for students who lack the easily identifiable mathematics antecedents, excepting the truly gifted. A less common problem, but one damaging to our most talented students, is the reverse situation: advancement in grade level (as was done with my son at his private school; he is now chair of Chemistry and Biochemistry at Amherst College) is almost unheard of. Although many districts mandated it, and the API scoring of schools underscored it, placing all students in an honest Algebra class in 8th grade without a reasonable level of competence with the Standards of earlier grades was never the intention. It was to be the opportunity, not the mandate.

“Moreover, Common Core does not place a ceiling on achievement. What the standards do provide are key stepping stones to higher-level math such as trigonometry, calculus and beyond.”

Although these words are regularly repeated, reality is the diametric opposite. Across California, CPM (supposedly, College Preparatory Mathematics) is back with a vengeance. Ironically, CPM was the very catalyst that spawned the now-defunct Mathematically Correct, and it pulled its submission to California from the 2001 approval process rather than be rejected by our CRP (Content Review Panel). You’ll recall that it and San Francisco State’s IMP were among the federally blessed “Exemplary” programs for which the only mathematician on the panel, UT-SA’s Manuel P. Berriozábal, refused to sign off. Weren’t you among the signatories of David Klein’s full-page letter of objection in the Washington Post? One of CPM’s long-standing goals is to have ALL assessments – even final examinations – done collectively with one’s assigned group. It makes for a wonderful ruse – all students can appear to be meeting the “standards” of the course (even if absent!) – while deeply frustrating those students who are “getting it” (often with direct instruction by some family member who knows the subject). Trigonometry, calculus, and beyond from any of CPM, IMP, or Core-Plus (all self-blessed as CCSS-M compatible)? It just doesn’t happen. However, from the homepage of Core-Plus:

“The new Common Core State Standards (CCSS) edition of Core-Plus Mathematics builds on the strengths of previous editions that were cited as Exemplary by the U.S. Department of Education Expert Panel on Mathematics and Science”

What did happen – and may already be happening again? Beneath the horizon, schools began to offer a traditional alternative, to provide an opportunity for adequate preparation for knowledgeable students with math-based career aspirations. What also happened (but may not be successful this time, because of the SBAC or PARCC state examinations?) was that other students and their parents petitioned their Boards of Education for an elective choice and, if unfettered choice was granted, the death knell sounded on the innovative “deeper understanding” curriculum and pedagogy.

Finally, you do acknowledge the ridiculous nature of the 6th grade “picture-drawing frenzy” observed by Prof. Ratner, but you seem to imply it was an isolated incident, against her description: “this model-drawing mania went on in my grandson’s class for the entire year.” The fact is that such misinterpretations of “teaching for deeper understanding” are going on for entire years in classrooms – in entire districts – all across the country; they are even taught by professional math educators as mandated by Common Core. You described her observation as a “failure to properly implement Common Core,” and I am sure that you believe that to be the case, but your conviction is belied by the fact that one of the three primary writers of the CCSS-M and the head of the SBAC-M is Phil Daro (bachelor’s degree in English Lit). Phil Daro has been strongly influential in precollegiate mathematics education – curricula and pedagogy – across California for decades; my first working acquaintance with him was in 1988, months prior to the first NCTM Standards. His vision for the “right” way to conduct mathematics classrooms (not “to teach”) helped lead to the 1992 CA Math Framework, MathLand-type curricula, and the ensuing California battles of the Math Wars, with our temporary respite beginning in late 1997. Unfortunately, his vision is not only reinvigorated here in California, it is now a huge national problem, and Prof. Ratner “nailed it”.

Wayne Bishop

Posted in Common Core, Education policy, K-12, math, Mathematics, Wayne Bishop | Comments Off on Wayne Bishop’s Response to Ratner and Wu (Wall Street Journal)