Students Last

Will Fitzhugh

The Concord Review
6 April 2017
The great social psychiatrist Harry Stack Sullivan wrote that the principal problem with communication is that we think we express meaning to others, when in fact we evoke it.
That is, what we say brings a response in the listener which involves their current thoughts at the time, their feelings, wishes, goals and other preoccupations, all of which affect and alter the meanings of our expression as they hear it.
Psychiatrists are carefully trained to be useful in that situation. They learn to listen. When they do listen, they can derive an understanding of at least some of the ways in which the thoughts of their patients have responded to what was said. They can find out how the patient’s own experiences, thoughts and concerns have interacted with what the psychiatrist said, and this can help the doctor shape what they say next in perhaps more pertinent and more useful ways.
When I was a high school History teacher I was not a bad person, but I almost never shut up in class. If the teacher talks, that can make life easier for students, because they can keep giving their attention to whatever they were thinking about at the time, and if the teacher seems to be slowing down, most students can easily ask a question to get the teacher talking again.
Most high school History teachers are not bad people, but they usually feel they have an obligation to talk, present, excite, inspire, demonstrate material and in other ways fill up the time of students in their classes. Some of the best teachers do ask questions, but even they believe they can’t spend too much time on student answers, not to mention on what students are actually thinking about what the teacher has said, or, if other students talk, about what they have said.
This is much less the case in some special secondary schools, like Phillips Exeter, which have small classes meeting around a table as a seminar, specifically designed to gather the comments and thoughts of students about academic subjects. But for public school teachers with five classes of 30 students each, that kind of dialogue is not an option.
Unless they fall silent, high school History teachers almost never have any idea what their students are thinking, and students come to understand that, at least in most classrooms, what is on their minds is of little importance to the process. This doesn’t mean that they don’t learn anything in their History classes. Some teachers really are well-educated, full of good stories, fascinating speakers, and fun to be with. That does not change the fact that even those best teachers have very little idea of what students are actually thinking about the History which is offered to them.
Some teachers do assign short papers, and if the students can choose the topics themselves, and if teachers have the time to read those papers, they can learn more about what some part of History means to their students. Sad to say, the assignment of serious History research papers is declining in this country, with some students working on slide presentations or videos, but many fewer students writing Extended Essays in History.
Education reform pundits all agree that the most important variable in student academic achievement is teacher quality, because what teachers do is the lowest level of educational activity these pundits are able to notice. In fact, the most important variable in student academic achievement is student academic work. Students learn the most from the academic work that they do, but this factor escapes the notice of most education professors, theorists, reporters and other thought leaders.
Since 1987, The Concord Review has published 1,241 exemplary History research papers [average 7,000 words, with endnotes and bibliography] by secondary students from 44 states and 40 other countries. These papers cover a very wide variety of historical topics, ancient and modern, domestic and foreign, but all of them show what students are actually thinking as they take History seriously. If more teachers of History would read a few of these strong research papers, they would become more aware, first, that some high school History students actually can think about History, and second, that such student writing, based on extensive reading of History, demonstrates a level of sophistication in their understanding of History that can never be discovered in classes where teachers do all the talking.
Great teachers of History should continue to talk the way they do in classes, and their students will learn a lot. But the actual thoughts of students of History should have a place for their expression as well. Students whose work is published in The Concord Review not only benefit from the hard work they have done, they also come to have greater respect for their own achievement and potential as scholars of History.

“Teach with Examples”
Will Fitzhugh [Founder],
The Concord Review [1987]

National Writing Board [1998]
TCR Academic Coaches [2014]

TCR Summer Program [2014]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
Varsity Academics®
Posted in Education Reform, History, Humanities, K-12, Social Studies, Will Fitzhugh

Another post-Inaugural Change: The calendar

Here in DC, the nation’s capital, which has enjoyed Home Rule since 1974, but remains ultimately under the thumb of Congress and the President (thanks to Art. I, Section 8 of the Constitution), one never knows what surprise awaits each new day. These days, one need not be addicted to social media or even tea leaves to hypothesize what’s around the corner. Something as politically innocuous as an “Alley Restoration” could be a harbinger of things to come.

A few weeks ago, as I turned the corner into my alley, I was struck by signs announcing an “Alley Restoration” and a change in our calendar. Not since Pope Gregory XIII or even Julius Caesar… (except for the brief anticlerical calendar of the French Revolution). Don’t blame Marion Barry; he has passed to his reward.


Posted in Erich Martel

“Organizationally orchestrated propaganda” at ETS

With the testing opt-out movement growing in popularity in 2016, Common Core’s profiteers began to worry. Lower participation enough and the entire enterprise could be threatened: with meaningless aggregate scores; compromised test statistics vital to quality control; and a strong signal that many citizens no longer believe the Common Core sales pitch.

The Educational Testing Service (ETS) was established decades ago by the Carnegie Foundation to serve as an apolitical research laboratory for psychometric work. For a while, ETS played that role well, producing some of the world’s highest-quality, most objective measurement research.

In fits and starts over the past quarter century, however, ETS has commercialized. At this point, there should be no doubt in anyone’s mind that ETS is a business–a business that relies on contracts and a business that aims to please those who can pay for its services.

Some would argue, with some justification, that ETS had no choice but to change with the times. Formerly guaranteed contracts were no longer guaranteed, and the organization needed either to pay its researchers or let them go.

Instead of now presenting itself honestly to the public as a commercial enterprise seeking profits, however, ETS continues to prominently display the trappings of a neutral research laboratory seeking truths. Top employees are awarded lofty academic titles and research “chairs”. Whether the awards derive from good research work or success in courting new business is open to question.

I perceive that ETS at least attempts something like an even split between valid research and faux-research pandering. The awarding of ETS’s most prestigious honor bestowed upon outsiders–the Angoff Award–for example, alternates between psychometricians conducting high-quality, non-political technical work one year and high-profile gatekeepers conducting highly suspicious research the next. Members of the latter group can be found participating in, or awarding, ETS commercial contracts.

With their “research” on the Common Core test opt-out movement, ETS blew away any credible pretense that it conducts objective research where its profits are threatened. Opt-out leaders are portrayed by ETS as simple-minded, misinformed, parents of poor students, …you name it. And, of course, they are protesting against “innovative, rigorous, high quality” tests they are too dumb to appreciate.

Common Core testing, in case you didn’t know and haven’t guessed from what is written above, represents a substantial share of ETS’s work. Pearson holds the largest share of work for the PARCC exams, but ETS holds the second largest.

The most ethical way for ETS to have handled the issue of Common Core opt-outs would have been to say nothing. After all, it is, supposedly, a research laboratory of apolitical test developers. They are statistical experts at developing assessment instruments, not at citizen movements, education administration, or public behavior.

Choosing to disregard the most ethical choice, ETS could have at least made it abundantly clear that it retains a large self-interest in the success of PARCC testing.

Instead, ETS continues to wrap itself in its old research laboratory coat and condemns opt-out movement leaders and sympathizers as ignorant and ill-motivated. Never mind that the opt-out leaders receive not a dime for their efforts, and ETS’s celebrity researchers are remunerated abundantly for communicating the company line.

Four months ago, I responded to one of these ETS anti-opt-out propaganda pieces, written by Randy E. Bennett, the “Norman O. Frederiksen Chair in Assessment Innovation at Educational Testing Service.” It took a few weeks, but ETS, in the person of Mr. Bennett, responded to my comment questioning ETS’s objectivity in the matter.

He asserted, “There’s a lot less organizationally orchestrated propaganda, and a lot more academic freedom, here than you might think!”

To which I replied, “The many psychometricians working at ETS with a starkly different vision of what constitutes “high quality” in assessment are allowed to publish purely technical pieces. But, IMHO, the PR road show predominantly panders to power and profit. ETS’s former reputation for scholarly integrity took decades to accumulate. To my observation, it seems to be taking less time to dissemble. RP”

My return comment, however, was blocked. All comments have now been removed from the relevant ETS web page. All comments remain available to read at the Disqus comments manager site, though. The vertical orange bar next to the Nonpartisan Education logo is Disqus’ indication that the comment was blocked by ETS at its web site.


Posted in Censorship, College prep, Common Core, Education policy, Ethics, information suppression, Richard P. Phelps

Martin Luther King’s non-violence: Personal belief or strategy or both?

On a day when we remember Martin Luther King, I want to share a personal perspective on his advocacy of non-violence. When the wisdom of a great person is invoked, omission of the context that gave it meaning demeans the person and distorts his/her message.

The origin of this reflection:

Shortly after 9/11, the teacher of the “Alternatives to Violence” class at Washington, DC’s Wilson HS, a DC public school, invited a number of speakers opposed to the US military response to share their views with students and interested teachers.

Some also criticized the SAT as “racist,” “since poor black children shouldn’t be expected to know vocabulary words like ‘yacht’ they don’t hear at home.”*

Several spoke about the “AIDS conspiracy,” but were curiously silent about South African President Thabo Mbeki’s pseudo-scientific theories of its origins, the basis of his opposition to preventive health measures. No mention was made of the protests that Mbeki’s measures provoked.

One speaker, who had attended the recent UN World Conference Against Racism in Durban, South Africa, decried the fact that English was one of the conference’s eight “privileged” official languages. He was clearly oblivious to the fact that the Soweto uprising, which reignited the movement that culminated in the end of apartheid, began when Soweto high school students demanded the right to be taught in English, the language of Martin Luther King and Malcolm X, whose speeches on smuggled tapes were the target of the apartheid government’s thinly disguised censorship plan of making Afrikaans the language of instruction. (In 1979, when I was at Cardozo HS, with the backing of the Washington Teachers’ Union and its president, William Simons, I helped to organize a speaking tour of DC high schools for Soweto student leader Tsietsi Mashinini).

Speakers made frequent references to Martin Luther King Jr and his advocacy of non-violence, attempting to tie it to each of these issues. A few months later, on the occasion of his birthday, students and teachers were invited to a speak-out on non-violence and the war in Afghanistan (this was a year before the Iraq invasion). I gave the following talk.

– – – – – – – – – – –


From a pay phone somewhere in the Negro neighborhood of Selma, Alabama, the scratchy sound of my friend Walter’s insistent voice stirred me. Following the Battle at Selma Bridge and the cowardly murder of Jim Reeb, the ex-minister of DC’s All Souls Unitarian Church, our safe classrooms overlooking the Potomac couldn’t keep him north. His picture had just appeared in Time Magazine, backed against a wall, dodging Sheriff Jim Clark’s trademark white cane, swung from horseback with punishing effect. I could think of no reason to stay away.

It was March 1965.
Two days later, my ’51 Merc was one of several carloads that ended up in Montgomery, Alabama, home of the Confederacy and George Wallace, its current symbol of defiance. We were among the many drawn to the last half dozen or so miles of that great and swelling march, where Martin Luther King, speaking on the capitol steps, called upon the U.S. Congress to enact the stalled Voting Rights Bill. The marching, the singing and the exhilaration of a common bond of purpose forged indelible memories that gave life meaning and direction:

– Packed like sardines on the floor of Mr. Ziegler’s modern brick house on a dusty street in the “colored section” that the city fathers saw no need to pave;

– An old woman pressing a few hard earned dimes and nickels into my confused and hesitant hands, blessing me for coming to her city for a day that, for too long, lived only in hope and faith;

– Swaying to the flow of “We Shall Overcome,” sung with a spiritual intensity that only long-awaited justice can evoke;

– The lasting images of the long line of marchers (my first of many) winding along the highway into Montgomery, especially noticeable for the religious diversity visible in religious garb: Priests and ministers wearing the Roman collar, nuns in their habits, men wearing the kippah (then more commonly called a yarmulke) and, as seen in the movie, a robed Eastern Orthodox prelate with cross and scepter, and the blue jean overalls favored by many of the young civil rights workers. Along the side of the road, again as in the movie, African-American children and older adults not joining in, but showing their support by smiling and waving at us.

– The prickly sensation of fear, when a jackbooted motorcycle cop, spotting my illegal left turn, pulled me over, New Jersey license plates, unimpeachable evidence of my sin as “outside agitator”:

“You boys comin’ from thuh ralllihh?”
“No, sir; we’re on our way back from Spring Break in New Orleans,”** were the timid words of discretion I heard myself speak.
“Youuu broke thuh law back a ways with that ill-legal left turn, an offense against thuh laws of Mon’gom’ry, Alibammuh. Youuu will folluh me to the cawthouse. Heahhh?”
“Yes, sir.”

With barely $20 between us, images of jail cells and the three recently murdered civil rights workers flashed through my mind.

I don’t really remember Martin Luther King’s speech; oh, something about voting rights and the governor’s refusal to protect the marchers. More meaningful than those forgotten words was his gift to me and countless others: A welcome into that great movement for justice and into the arms of humanity and the responsibilities that membership brings.

The power of that movement for justice and his accomplishments are misunderstood, if reduced to an oversimplified advocacy of non-violence. Understanding that it was simultaneously a strategy does not devalue his personal belief. From Thoreau’s writings on non-violent resistance to unjust laws to Gandhi’s practical application in India and the strategy workshops at Highlander Folk School (attended also by Rosa Parks), King’s vision was translated into Alabama reality by union veteran and NAACP leader E.D. Nixon. King’s vision and strategy were grounded on the confluence of evolving global changes and domestic realities that began with Brown vs. Board of Education in 1954, the irreparable fissure in America’s Berlin Wall of legalized segregation.

Like Gandhi, George Washington and even Ho Chi Minh, King understood that those who appeared to benefit from privilege were no monolith. The movement for justice could win support not only from those under the heel of Jim Crow but also from those on the other side of the color line, capable of rejecting a “just us” version of justice.
King also understood another reality that often discomforts those who favor social justice, but not when imposed by the Federal Government: Opponents often yield, not out of moral enlightenment, but when continued resistance seems futile. And, as long as resistance festers, it may reassert itself when it no longer seems futile.***

King understood the power of television. The brutal treatment of fellow Americans peacefully seeking to exercise constitutional rights long guaranteed on paper was witnessed daily in the nation’s living rooms and now became increasingly intolerable. The strategy of non-violence made nation and world witness to the real source of violence.

Then, too, the State Department had run out of red-faced explanations for the rude treatment and crude insults endured by African and Asian diplomats on Maryland’s Route 40 when driving between Washington embassies and UN offices in New York. As America competed with the Soviet Union for world leadership, the message of democracy and freedom increasingly stumbled on the hypocritical contrasts of those embarrassing facts.

Then there was that war in Viet Nam and Martin Luther King’s powerful sermon announcing his public opposition – and break with President Johnson – delivered at New York’s Riverside Church on April 4th, 1967, a year to the day before violence born of hate stole his life.

But wait! Didn’t he receive the Nobel Peace Prize three years earlier – in 1964? And didn’t the U.S. troop escalation begin in March 1965, two full years before the Riverside sermon, by which time over 10,000 Americans and tens of thousands of Vietnamese had been killed! Two years of public silence!! Where were the public condemnations from the apostle of non-violence? Was he a hypocrite? If so, why not just overlook that flaw whenever the sainted, now forever muted, icon of non-violence can be invoked for the final word!

For King, the commitment to civil rights and economic opportunity compelled him to choose between his personal revulsion against the violence of war and his reluctance to alienate the president who had signed two civil rights bills and funded a war on poverty – as well as that much bigger one in Vietnam. Was the resulting conflict between the non-violence of personal conviction and the strategy of non-violence that won political support against seemingly unmovable odds just another instance of the hypocrisy?

When his advocacy of non-violence is torn from the historical context that gave it life and then reduced to a rigid slogan or dogma, the lessons to be learned from the real human dilemma lose meaning and instructive value.

For that reason, we should treat with caution efforts to invoke his blessing on present-day [2002] controversies:

Would he have condemned the U.S. military response to 9/11?
Would he condemn the World Bank and International Monetary Fund?
Would he politely ignore South African President Thabo Mbeki’s pseudo-scientific AIDS fantasies?
Would he condemn SAT tests as racist?

Before rushing to offer a politically convenient answer, we should remember that, as a leader breaking new ground, he took responsibilities upon himself that made rigid adherence to doctrine or philosophy a luxury. Before invoking his blessing for some partisan cause, we should recall how easy it is to summon gods and icons to legitimize both human cruelty and human kindness.

Oh – some stories do end well. The fine for the moving violation on the streets of Montgomery: “City of Montgomery vs. Erich Martel: $3.00,” which, in 1965, was the price of 10 gallons of gas.

For Viola Liuzzo, however, a mother of five from Detroit who volunteered to drive marchers between Selma and Montgomery, a Klansman’s drive-by shotgun blast ended her life, joining her name to the countless many who paid the ultimate price in pursuit of justice.

— Erich Martel [originally written, January 15, 2002]


* Core knowledge advocate E.D. Hirsch has pointed out that the 1960’s Black Panther Party newspaper employed correct grammar and used words like “imperialism,” “capitalism,” etc., assuming that its target audience would know or learn terms and concepts they were unlikely to hear at home.
** In fact, a mere 10 days earlier, a bunch of us had driven to New Orleans for Mardi Gras, which is probably why that came so quickly to mind.
*** We now see that this has come to pass. After the U.S. Supreme Court’s 2013 Shelby County decision weakened the enforcement provisions of the Voting Rights Act, many state legislatures began to enact restrictive voting laws.

Posted in Erich Martel, Ethics

Significance of PISA math results

A new round of two international comparisons of student mathematics performance came out recently, and there was a lot of interest because the reports were almost simultaneous: TIMSS[1] in late November 2016 and PISA[2] just a week later. They are often dated 2015 instead of 2016 because the data collection for each took place in late 2015, which would seem to make the comparison even more apt. In fact, no comparison is appropriate; they are completely different instruments and, between them, TIMSS is the one that should be of more concern to educators. Perhaps surprisingly, and although there is great room for improvement, US performance is not as dire as the PISA results would imply. By contrast, Finland continues to demonstrate that its internationally recognized record of PISA-proven success in mathematics education – with its widely applauded, student-friendly approach – is thoroughly misleading.

In spite of the popular press and mathematics education folklore, Finland’s performance has been known to be overrated since PISA first came out, as documented by an open letter[3] written by the president of the Finnish Mathematical Society and cosigned by many mathematicians and experts in other math-based disciplines:

“The PISA survey tells only a partial truth of Finnish children’s mathematical skills” “in fact the mathematical knowledge of new students has declined dramatically”

This letter links to a description[4] of the most fundamental problem that directly involves elementary mathematics education:

“Severe shortcomings in Finnish mathematics skills” “If one does not know how to handle fractions, one is not able to know algebra”

The previous TIMSS had Finland’s 4th-grade performance a bit above that of the US but well behind it by 8th grade. In the new report, Finland has slipped below the US at 4th grade and did not even submit itself to be assessed at 8th grade, much less at the Advanced level. Similar remarks apply to another country often recognized for its student-friendly mathematics education, the Netherlands, home of PISA at the Freudenthal Institute. This decline is visible in the TIMSS summary of student performance[1], with the comparative grade-level rankings given as Exhibits 1.1 and 1.2 and the Advanced results[5] as Exhibit M1.1:

By contrast, PISA[2] came out a week later and…

Netherlands 11
Finland 13
United States 41

Note: These rankings include “China”* (just below Japan), which here comprises only three provinces, not the whole country – if it is omitted, subtract 1 from each rank.

Why the difference? The problem is that PISA was never a test of “school mathematics” but of all 15-year-old students’ “mathematics literacy[6]” – not even mathematics at the algebra level needed for non-remedial admission to college, much less the TIMSS Advanced level, interpreted in the US as AP or IB Calculus:

“PISA is the U.S. source for internationally comparative information on the mathematical and scientific literacy of students in the upper grades at an age that, for most countries, is near the end of compulsory schooling. The objective of PISA is to measure the “yield” of education systems, or what skills and competencies students have acquired and can apply in these subjects to real-world contexts by age 15. The literacy concept emphasizes the mastery of processes, understanding of concepts, and application of knowledge and functioning in various situations within domains. By focusing on literacy, PISA draws not only from school curricula but also from learning that may occur outside of school.”

Historically relevant is the fact that the conception of PISA at the Freudenthal Institute in the Netherlands included heavy guidance from Thomas Romberg of the University of Wisconsin’s WCER, the original creator of the middle school math ed curriculum MiC, Mathematics in Context. Its underlying philosophy is exactly that of PISA: the study of mathematics through everyday applications that do not require developing the more sophisticated mathematics that opens the doors to deeper study in mathematics – that is, to virtually all math-based career opportunities, the so-called STEM careers. In point of fact, the arithmetic of the PISA applications is calculator-friendly, so even elementary arithmetic through ordinary fractions – so necessary for eventual algebra – need not be mastered to score well.


[2] (Table 3, page 23)
[5] [Distribution of Advanced Mathematics Achievement]

Wayne Bishop, PhD
Professor of Mathematics, Emeritus
California State University, LA

Posted in Education journalism, Education policy, Education Reform, information suppression, K-12, Mathematics, OECD, Testing/Assessment, Wayne Bishop

A New Core

The Concord Review
December 2, 2016

Dinosaur scholars like Mark Bauerlein argue that the decline in the humanities in our universities is caused by their retreat from their own best works—literature departments no longer celebrate great literature, history departments no longer offer great works of history to students to read, and so on.

However, an exciting new article by Nicholas Lemann in The Review from The Chronicle of Higher Education, while it shares some concerns about the decline of the humanities, proposes an ingenious modern new Core, which would…

“put methods above subject-matter knowledge in the highest place of honor, and they treat the way material is taught as subsidiary to what is taught…”

In this new design, what is taught is methods, not knowledge—of history, literature, languages, philosophy and all that…

Here is a list of the courses Professor Lemann recommends:

Information Acquisition
Cause and Effect
The Language of Form
Thinking in Time

And he says that: “What these courses have in common is a primary commitment to teaching the rigorous (and also properly humble) pursuit of knowledge.”

At last we can understand that the purpose of higher education in the humanities should be the pursuit of knowledge, and not actually to catch up with any of it. We may thus enjoy a new generation of mentally “fleet-footed” ignoramuses who have skipped the greatness of the humanities in the chase for methods and skills of various kinds. This approach is as hollow and harmful as it was in the 1980s, when Harvard College tried to design a knowledge-free, methods-filled Core Curriculum. It seems that what comes around does indeed come around, but students these days are still neither learning from nor enjoying the greatness of the humanities in college…


“Teach with Examples”
Will Fitzhugh [Founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Academic Coaches [2014]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
Varsity Academics®

Posted in Common Core, Education Reform, Higher Education, Humanities, Will Fitzhugh

Yes, President Trump can do something about Common Core

For starters, he can shut down the federal funding of organizations that have supplied the misinformation that begat and continues to propagandize Common Core. While the Gates Foundation gets the most attention, government-funded entities play their part. For example, our nation could be much improved if relieved of the burden of fuzzy research produced at the Center for Research on Educational Standards and Student Testing (CRESST), the Board on Testing and Assessment (BOTA) at the National Research Council, and K-12 programs in the Education and Human Resources (EHR) Division of the National Science Foundation. All have been captured by education’s vested interests, and primarily serve them.

Posted in Common Core, Education policy, K-12, research ethics, Richard P. Phelps, Testing/Assessment

Among the Constructivists

The online journal Aeon posted (6 October, 2016) The Examined Life, by John Taylor, director of Learning, Teaching and Innovation at Cranleigh boarding school in Surrey (U.K.).

Taylor advocates “independent learning” in describing his “ideal classroom”:

“The atmosphere in the class is relaxed, collaborative, enquiring; learning is driven by curiosity and personal interest. The teacher offers no answers but instead records comments on a flip-chart as the class discusses. Nor does the lesson end with an answer. In fact it doesn’t end when the bell goes: the students are still arguing on the way out.”

As for what he sees as the currently dominant alternative:

“Students are working harder than ever to pass tests but schools allow no time for true learning in the Socratic tradition.”

“Far from being open spaces for free enquiry, the classroom of today resembles a military training ground, where students are drilled to produce perfect answers to potential examination questions.”

…You get the drift.

A bit sarcastically, I write in the Comments section:

“So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher.”

To my surprise, other readers respond. I find the responses interesting. (Numbers of “Likes” current as of 9 October, 2016.)

Richard Phelps
So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher.     Like 0

Dan Fouts
If only it were like that! The ideal classroom described in this article would be led by a teacher who does a very different kind of work– coaching others to think rather than dictating everything–Being patient with confusion rather than rushing to answers– Discarding pre-determined outcomes and instead promoting outcomes that reveal themselves within lessons. This is very difficult, time-consuming teacher work.     Like 2

Richard Phelps
One purpose for tests is as an indicator to parents and taxpayers that their children are learning something. How would you convince parents and taxpayers that students have “learned how to think”? I presume that there is no test for that, and that you might not want to use it even if it existed, as that could induce “teaching to the test”. So, what would you tell them?     Like 1

Dan Fouts
Great point and questions. Therein lies the challenge. Since thinking itself is a mental process, it eludes empirical measurement in a very real way. We are in an education system that places value on things only if students can show they can DO something (this is the behaviorist model) and only if what they do is measurable using the language of mathematics. Standardized tests are wonderful models to use once we have embraced these assumptions. Cultivating independent thinking isn’t really on the radar.

Though I tell them that writing assessments or projects (as referenced in the article) are better vehicles to demonstrate independent thinking.     Like 2

John Taylor
I would agree with you Dan. Project work has the advantage that it is conducted over a period of time, during which a range of skills can be exhibited, and, typically, the teacher can form a better judgement of the student’s capacity for thinking their way through a problem. Exams, being a snapshot, are limited in this regard and the assessment of factual recall tends to be to the fore, as opposed to capacities for reflection, questioning of assumptions, exploration of creative new options, and so on. I think too that we could make more use of the viva; in my experience, asking a student to talk for a few minutes is an excellent way of gauging the depth of their understanding.     Like 1

Ian Wardell
Teaching people the ability to think is more important than passing tests. What is important is the ability of people to think for themselves and to attain understanding. Not to simply unthinkingly churn out what others have said.     Like 0

Richard P. Phelps
Again, how do you measure that? How can a parent or taxpayer know that their children are better off for having gone to school? How do you prove or demonstrate that a child is now better able to think than before?     Like 1

Amritt Flora
Richard, therein lies the dilemma – the need for people to measure rather than believe. If we stopped being obsessed with measuring and categorising so deeply everything we do, we would be in a better position. You should only need to talk to a child to know that they have learned to think. Maybe we don’t have time to do that.     Like 0

Brian Fraser
I would ask them to read “An Atom or a Nucleus?” It takes the position that the thing that has virtually all the mass of the atom, and which accounts for all the properties of the atom, is actually the atom itself, not some sort of “nucleus” of something. This goes contrary to what we have been taught for the past 100 years.

This is supposedly “hard science” physics. But it raises deeply disturbing questions about Pavlovian style education.

The link is (Take the test at the end of the article)

If we are wrong about the atom “having” a nucleus, we could be wrong about A LOT of things, even in the “objective sciences”.     Like 0

Young Thug
I think most parents want what is best for their children. I don’t think anyone wants their child to be a little robot who can take a test but not navigate through life and all its challenges. And if they do, that’s just sad. It should be noted that the author did not say we should do away with examinations. In fact, they said this kind of class increases performance on examinations, and I have first hand experience with that since I teach a class after school, on a volunteer basis, that also uses a discussion format. Our program has also helped improve test scores among students that took it (and this in a lower income neighborhood) and we have data to prove it. So the results will show, I have confidence in that.

But there is an easy way parents can know what their kids are learning in school. They can just talk to them. And these kids actually want to be in my classroom. One time, I was going to cancel class because my co-facilitator did not show up and she had all the materials. But the kids, and this is, let me remind you, AFTER school, came trailing into an empty classroom with their chairs and started setting up. I told them they had the day off, they could go play. They kept on setting up and said they wanted to have the class anyway and since I was there I might as well do it. These kids wanted to be there. These are regular kids by the way, chosen at random by the after school program. They wanted to be in that class because we have great discussions. These discussions are not random though, the questions are carefully chosen based on a curriculum that has been scientifically validated, and we guide the discussion along to make sure it goes somewhere productive. We don’t take a fully Socratic approach, we have a mixed teaching and discussion style. The classes are about an hour and a half long. And I’ve had parents come up to me many times and thank me personally because they have seen their children change after taking my program. So if kids are interested and engaged in school, they will talk to and tell you about it if you ask. Because they are interested, and kids, like all people, like speaking about things they are interested in.     Like 2

Ian Wardell
Nice for the teacher, nice for the children, nice for society as a whole that we are educating people to think for themselves.     Like 0

Richard P. Phelps
“we are educating people to think for themselves” How do you know you are? How do you measure it?     Like 0

Nicola Williams
This type of teaching takes a great deal of preparation, and I would say it is actually far more challenging for a teacher to guide and direct students towards answers and valuable discussion than to spout out the answers themselves. The teacher who looks like they are doing very little, and manages to guide students to a point where they have learnt something, is an outstanding teacher – they pull the strings, and the students are guided into finding the answers themselves: students feel fantastic because they did it ‘on their own’, and, because they did the legwork instead of writing down an answer they were told, it sticks in their mind for much longer.     Like 1

Digital Diogenes Aus
Teaching to the test is easy.
Sure, it’s stressful and a lot of work, but it’s a lot of grunt work.
Teaching in the Socratic fashion is hard: you actually have to know what you’re talking about, you have to know your kids, and you have to consistently stay ahead of the curve.     Like 1

Posted in Education policy, K-12, Richard P. Phelps, Testing/Assessment

More Common Core salespersons’ salaries

In a previous post, I summarized recent Form 990s—the financial reporting documents required of large US non-profits by the Internal Revenue Service—filed by three organizations. The Thomas B. Fordham Institute, the Alliance for Excellent Education, and the National Center on Education and the Economy were and are paid handsomely to promote the Common Core Standards and affiliated programs.

Here, I review Form 990s for three more Common Core-connected organizations—Achieve, The Council of Chief State School Officers (CCSSO), and PARCC, Inc.

PARCC, the acronym for Partnership for Assessment of Readiness for College and Careers, represents one of two Common Core-affiliated testing consortia. I attempted to find Form 990s for the other testing consortium, Smarter-Balanced, but failed. They would appear to be very well hidden, inside the labyrinthine accounting structure of either the University of California-Los Angeles (UCLA) or the University of California system.

The most recently available documents online for each organization included below emanate from either the 2013 or 2014 tax and calendar year. According to Achieve’s filing, it spun off PARCC, Inc. as “an unrelated entity” exactly midway through 2014.

Now for the salaries…

Achieve, Inc.
Achieve2013 – Achieve claimed four program activities for the year, all associated with “college and career ready initiatives”. Six employees, including President Michael Cohen and Senior Math Associate Laura Slover, received financial compensation in excess of $200,000, and twenty in excess of $100,000. Another $195,000 went to Common Core Standards writer Sue Pimentel, of New Hampshire, as a “consultant”. Public Opinion Strategies received over $175,000 for “research”. “Consultants” from the Council of State Science Supervisors collectively absorbed half a million dollars.

Oddly, Achieve listed zero expenses for “lobbying” and “advertising and promotion”. Instead, it categorized almost $5 million under “Other professional fees”. Almost a million each was spent on travel and “conferences, conventions, and meetings.”

Council of Chief State School Officers
CCSSO2014 – CCSSO received over $2.5 million in member dues, primarily from states paying for places at the table for their state chief education officers. Not many years ago, these dues, plus whatever surplus income it kept from annual meeting registrations, paid its rent and salaries.

In 2014, however, “contracts, grants, & sponsorships” income exceeded $31 million, twelve times the amount from dues and meetings. CCSSO in its current form could easily survive a loss of member dues payments; it could not survive a loss of contracts and grants—read Common Core promotion payments. The tail now wags the dog.

Twenty-six CCSSO staffers received salaries in excess of $100,000 annually. At least another six took home more than $200,000. The CEO, Chris Minnich, got more than a quarter million. Over half a million was claimed for “lobbying to influence a legislative body (direct lobbying)”, but $0 as “lobbying to influence public opinion (grass roots lobbying).” Yet, at another juncture, a “grassroots nontaxable amount” of $250,000 is declared.

CCSSO spent over $8 million on travel in 2014, more than on salaries and wages.

So much money flows through CCSSO that it earned almost a quarter million dollars from investments alone in 2014.

PARCC, Inc.
PARCC2014 – According to Achieve, PARCC, Inc. began life on July 1, 2014. Nonetheless, its top officers seem to have earned healthy annual salaries: seven in excess of $100,000 and two in excess of $200,000. Laura Slover, last seen above as Senior Math Associate at Achieve in 2013, became CEO of PARCC, Inc. in 2014, with over a quarter million in salary. PARCC spent $1.242 million on travel in 2014.

PARCC’s revenue consisted of $66 million in government grants, and $0.6 million from everywhere else. PARCC’s expenses comprised $34.8 million to NCS Pearson and $6.4 million to ETS for test development, and $1.3 million to Rupert Murdoch’s and Joel Klein’s Amplify Education and $0.8 million to Breakthrough Technologies for IT work.

Posted in Common Core, Education policy, Education Reform, Richard P. Phelps, Testing/Assessment

Does Common Core add up for California’s math students?*

As this public school year begins, districts across California are reporting student performance on new exams based on California’s adaptation of the controversial Common Core federal standards. Students and parents have good reason to be anxious about the newly released scores now and for years to come.

The first thing we are told by state officials is that the exams are based on “more rigorous Common Core academic standards.” In many states, the remark would be correct. But in California, especially in mathematics, the exact opposite is true.

California and Massachusetts had the best state standards in the country and we have both lost them along with the excellent CSTs (California Standards Tests) and each school’s API (Academic Performance Index). The API’s two 1-10 scores were based on the school’s CSTs — collective student performance — against all California schools and also against 100 comparable schools. Although simplistic, these were amazingly effective. They were incomparably better than the new color-coded “scores” that interested observers will not understand, probably by design.

There is a widely held misconception that multiple-choice tests are misinforming because it is “easy for students to guess answers.” This criticism ignores the reality that all students are in the same boat, with strong students having a better opportunity to demonstrate what they know.

As described by the officials, the new test requires students to answer follow-up questions and perform a task that shows their research and problem-solving skills. Nice as this sounds, the reality is that it makes the mathematics tests far more verbal. Any student with weak reading and writing skills is unfairly assessed. That is especially problematic for English learners.

Low socio-economic Latino kids will be further burdened in demonstrating their mathematics competence, and Chinese or Korean immigrants who are a couple of years ahead mathematically (as was my daughter-in-law when she immigrated as a fifth-grader from Korea) will be told their mathematics competence is deficient. Absolutely absurd. Mathematics carried her for a couple of years until her English became good enough for academic work in other subjects.

The Common Core math standards, and the misguided philosophy of mathematics education behind them, are the heart of the problem. The new assessments simply reflect them. They say mathematics is best learned through students’ exploration of lengthy “real world” problems rather than the artificial setting of a competent teacher teaching a concept followed by straightforward applications thereof.

The reality is that traditional (albeit contrived) word problems lead to better retention and use of the mathematics involved. Comparison with the highly effective Singapore Primary Math Series is illustrative.

Another misconception of teachers and assessment “experts” is that Common Core expects students to use nonstandard arithmetic algorithms. These nonstandard algorithms are often taught in place of the familiar ones, e.g., borrow/carry in subtraction/addition and vertical multiplication with its place-value shift with successive digits. Stephen Colbert’s delightful derision, which you can find by googling Colbert and Common Core, provides an example of that parental frustration.

Hard as it is to believe, one of the top three guides for the national math standards, and the sole guide for California’s new exams from the Smarter Balanced Assessment Consortium, has no degree in mathematics; his degree is in English literature.

Moreover, both the corresponding curricula and these less meaningful assessments are exactly what the Math Wars of the 1990s were about. The former standards, which came out in late 1997, were written by a subgroup of the Stanford mathematics faculty and were based on the goal of making eighth-grade algebra a realistic opportunity for all California students, not just those whose parents can afford a good private school.

The idea that the Common Core standards and associated assessments are more rigorous and provide greater opportunities for California students is based on ignorance or, worse, is completely disingenuous.

Wayne Bishop is a professor of mathematics at Cal State Los Angeles.

*Originally published in the San Gabriel Valley [Los Angeles] Tribune, 2 September, 2016

Posted in Common Core, Education Fraud, Education policy, Education Reform, K-12, Testing/Assessment, Wayne Bishop

Johns Hopkins’ flawed report on Kentucky

It looks like a recent, very problematic report from Johns Hopkins University, “For All Kids, How Kentucky is Closing the High School Graduation Gap for Low-Income Students,” is likely to get pushed well beyond the Bluegrass State’s borders.

The publishers just announced a webinar on this report for August 30th.

Anyway, you need to get up to speed on why this report is built on a foundation of sand. You can do that fairly quickly by checking these blogs:

News release: The uneven quality of Kentucky’s high school diplomas

More on the quality control problems with Kentucky’s high school diplomas – Part 1

A third blog post will be released at 8 a.m. Eastern tomorrow. It will probably be linked at:

More on the quality control problems with Kentucky’s high school diplomas – Part 2

I won’t know for sure until it is released, however.

Let me know if you have questions and especially if this Hopkins report starts making the rounds in your state.

Posted in College prep, Common Core, Education journalism, Education policy, Education Reform, K-12, research ethics, Richard Innes

101 Terms for Denigrating Others’ Research

In scholarly terms, a review of the literature or literature review is a summation of the previous research conducted on a particular topic. With a dismissive literature review, a researcher assures the public that no one has yet studied a topic or that very little has been done on it. Dismissive reviews can be accurate, for example with genuinely new scientific discoveries or technical inventions. But, often, and perhaps usually, they are not.

A recent article in the Nonpartisan Education Review collects hundreds of such statements—dismissive reviews—from some prominent education policy researchers.* Most of the statements are inaccurate; perhaps all of them are misleading.

“Dismissive review”, however, is the general term. In the “type” column of the files linked to the article, a finer distinction is made among simply “dismissive”—meaning a claim that there is no or little previous research, “denigrating”—meaning a claim that previous research exists but is so inferior it is not worth even citing, and “firstness”—a claim to be the first in the history of the world to ever conduct such a study. Of course, not citing previous work has profound advantages, not least of which is freeing up the substantial amount of time that a proper literature review requires.

By way of illustrating the alacrity with which some researchers dismiss others’ research as not worth looking for, I list the many terms marshaled for the “denigration” effort in the table below. I suspect that in many cases, the dismissive researcher has not even bothered to look for previous research on the topic at hand, outside his or her small circle of colleagues.

Regardless, the effect of the dismissal, particularly when coming from a highly influential researcher, is to discourage searches for others’ work, and thus draw more attention to the dismisser. One might say that “the beauty” of a dismissive review is that rival researchers are not cited, referenced, or even identified, thus precluding the possibility of a time-consuming and potentially embarrassing debate.

Just among the bunch of high-profile researchers featured in the Nonpartisan Education Review article, one finds hundreds of denigrating terms employed to discourage the public, press, and policymakers from searching for the work done by others. Some in-context examples:

  • “The shortcomings of [earlier] studies make it difficult to determine…”
  • “What we don’t know: what is the net effect on student achievement?
    -Weak research designs, weaker data
    -Some evidence of inconsistent, modest effects
    Reason: grossly inadequate research and evaluation”
  • “Nearly 20 years later, the debate … remains much the same, consisting primarily of opinion and speculation…. A lack of solid empirical research has allowed the controversy to continue unchecked by evidence or experience…”

To consolidate the mass of verbiage somewhat, I group similar terms in the table below.

(Frequency)   Denigrating terms used for other research
(43)   [not] ‘systematic’; ‘aligned’; ‘detailed’; ‘comprehensive’; ‘large-scale’; ‘cross-state’; ‘sustained’; ‘thorough’
(31)    [not] ‘empirical’; ‘research-based’; ‘scholarly’
(29)   ‘limited’; ‘selective’; ‘oblique’; ‘mixed’; ‘unexplored’
(19)   ‘small’; ‘scant’; ‘sparse’; ‘narrow’; ‘scarce’; ‘thin’; ‘lack of’; ‘handful’; ‘little’; ‘meager’; ‘small set’; ‘narrow focus’
(15)   [not] ‘hard’; ‘solid’; ‘strong’; ‘serious’; ‘definitive’; ‘explicit’; ‘precise’
(14)   ‘weak’; ‘weaker’; ‘challenged’; ‘crude’; ‘flawed’; ‘futile’
(9)    ‘anecdotal’; ‘theoretical’; ‘journalistic’; ‘assumptions’; ‘guesswork’; ‘opinion’; ‘speculation’; ‘biased’; ‘exaggerated’
(8)    [not] ‘rigorous’
(8)    [not] ‘credible’; ‘compelling’; ‘adequate’; ‘reliable’; ‘convincing’; ‘consensus’; ‘verified’
(7)    ‘inadequate’; ‘poor’; ‘shortcomings’; ‘naïve’; ‘major deficiencies’; ‘futile’; ‘minimal standards of evidence’
(5)    [not] ‘careful’; ‘consistent’; ‘reliable’; ‘relevant’; ‘actual’
(4)    [not] ‘clear’; ‘direct’
(4)    [not] ‘high quality’; ‘acceptable quality’; ‘state of the art’
(4)    [not] ‘current’; ‘recent’; ‘up to date’; ‘kept pace’
(4)    ‘statistical shortcomings’; ‘methodological deficiencies’; ‘individual student data, followed school to school’; ‘distorted’
(2)    [not] ‘independent’; ‘diverse’

As well as illustrating the facility with which some researchers denigrate the work of rivals, the table summary also illustrates how easy it is. Hundreds of terms stand ready for dismissing entire research literatures. Moreover, if others’ research must satisfy the hundreds of sometimes-contradictory characteristics used above simply to merit acknowledgement, it is not surprising that so many of the studies undertaken by these influential researchers are touted as the first of a kind.

* Phelps, R.P. (2016). Dismissive reviews in education policy research: A list. Nonpartisan Education Review/Resources/DismissiveList.htm

Posted in Censorship, Education journalism, Education policy, information suppression, research ethics, Richard P. Phelps

‘One size fits all’ national tests not deeper or more rigorous

Some say that now is a wonderful time to be a psychometrician — a testing and measurement professional. There are jobs aplenty, with high pay and great benefits. Work is available in the private sector at test development firms; in recruiting, hiring, and placement for corporations; in public education agencies at all levels of government; in research and teaching at universities; in consulting; and many other spots.

Moreover, there exist abundant opportunities to work with new, innovative, “cutting edge”, methods, techniques, and technologies. The old, fuddy-duddy, paper-and-pencil tests with their familiar multiple-choice, short-answer, and essay questions are being replaced by new-fangled computer-based, internet-connected tests with graphical interfaces and interactive test item formats.

In educational testing, the Common Core Standards Initiative (CCSI), and its associated tests, developed by the Smarter-Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), has encouraged the movement toward “21st century assessments”. Much of the torrential rain of funding, bursting forth from federal and state governments and from clouds of wealthy foundations, has pooled in the pockets of psychometricians.

At the same time, however, the country’s most authoritative psychometricians—the very people who would otherwise have been available to guide, or caution against, the transition to the newer standards and tests—have been co-opted. In some fashion or another, they now work for the CCSI. Some work for the SBAC or PARCC consortia directly, some work for one or more of the many test development firms hired by the consortia, some help the CCSI in other capacities. Likely, they have all signed confidentiality agreements (i.e., “gag orders”).

Psychometricians who once had been very active in online chat rooms or other types of open discussion forums on assessment policy no longer are, unless to proffer canned promotions for the CCSI entities they now work for. They are being paid well. They may be doing work they find new, interesting, and exciting. But, with their loss of independence, society has lost perspective.

Perhaps the easiest vantage point from which to see this loss of perspective is in the decline of adherence to test development quality standards, those that prescribe the behavior of testing and measurement professionals themselves. Over the past decade, for example, the International Test Commission (ITC) alone has developed several sets of standards.

Perhaps the oldest set of test quality standards was established originally by the American Psychological Association (APA) and was updated most recently in 2014—the Standards for Educational and Psychological Testing (AERA, NCME, APA). It contains hundreds of individual standards. The CCSI as a whole, and the SBAC and PARCC tests in particular, fail to meet many of them.

The problem starts with what many professionals consider the testing field’s “prime directive”—Standard 1.0 (AERA, NCME, APA, p.23). It reads as follows:

“Clear articulation of each intended test score interpretation for a specified use should be set forth, and appropriate validity evidence in support of each intended interpretation should be provided.”

That is, a test should be validated for each purpose for which it is intended to be used before it is used for that purpose. Before it is used to make important decisions. And, before it is advertised as serving that purpose.

Just as states were required by the Race to the Top competition for federal funds to accept Common Core standards before they had even been written, CCSI proponents have boasted about their new consortium tests’ wonderful benefits since before test development even began. They claimed unproven qualities about then non-existent tests because most CCSI proponents do not understand testing, or they are paid not to understand.

In two fundamental respects, the PARCC and SBAC tests will never match their boosters’ claims nor meet basic accepted test development standards. First, single tests are promised to measure readiness for too many and too disparate outcomes—college and careers—that is, all possible futures. It is implied that PARCC and SBAC will predict success in art, science, plumbing, nursing, carpentry, politics, law enforcement …any future one might wish for.

This is not how it is done in educational systems that manage multiple career pathways well. In Germany, Switzerland, Japan, Korea, and, unfortunately, only a few jurisdictions in the U.S., a range of different types of tests is administered, each appropriately designed for its target profession. Aspiring plumbers take plumbing tests. Aspiring medical workers take medical tests. And those who wish to prepare for more advanced degrees might take more general tests that predict their aptitude to succeed in higher education institutions.

But that isn’t all. SBAC and PARCC are said to be aligned to the K-12 Common Core standards, too. That is, they both summarize mastery of past learning and predict future success. One test purports to measure how well students have done in high school, and how well they will do in either the workplace or in college, three distinctly different environments, and two distinctly different time periods.

PARCC and SBAC are being sold as replacements for state high school exit exams, for 4-year college admission tests (e.g., the SAT and ACT), for community college admission tests (e.g., COMPASS and ACCUPLACER), and for vocational aptitude tests (e.g., the ASVAB). The problem is, these are very different types of tests. High school exit exams are generally not designed to measure readiness for future activity but, rather, to measure how well students have learned what they were taught in elementary and secondary schools. We have high school exit exams because citizens believe it important for their children to have learned what is taught there. Learning Civics well in high school, for example, may not correlate highly with how well a student does in college or career, but many nonetheless consider it important for our republic that its citizens learn the topic.

High school exit exams are validated by their alignment with the high school curriculum, or content standards. By contrast, admission or aptitude tests are validated by their correlation with desired future outcomes—grades, persistence, productivity, and the like in college—their predictive validity. In their pure, optimal forms, a high school exit exam, a college admission test, and vocational aptitude tests bear only a slight resemblance to each other. They are different tests because they have different purposes and, consequently, require different validations.
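To make the methodological distinction concrete: predictive validity is, at bottom, a correlation between test scores now and outcomes later, while concurrent validity correlates two measures taken at roughly the same time. A minimal sketch, with entirely hypothetical numbers (the scores, GPAs, and variable names below are invented for illustration, not drawn from any actual study):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Entirely hypothetical data, for illustration only.
hs_scores   = [520, 580, 610, 660, 700]   # high school exam scores today
college_gpa = [2.9, 2.6, 3.1, 3.3, 3.6]   # the SAME students' GPA, years later

# Predictive validity: today's test correlated with tomorrow's outcome.
# It cannot be computed until the later outcomes actually exist.
r = pearson(hs_scores, college_gpa)   # roughly 0.83 for these made-up numbers
```

The arithmetic is identical for concurrent validity; what differs is the data. Correlating a test with grades students have already earned, as in the study described below, tells you nothing directly about the test's power to predict outcomes that have not yet occurred.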


Let’s assume for the moment that the Common Core consortia tests, PARCC and SBAC, can validly measure all that is claimed for them—mastery of the high school curriculum and success in further education and in the workplace. The fact is no evidence has yet been produced that verifies any of these things. And, remember, the proof of, and the claims about, a new test’s virtues are supposed to be provided before the test is used purposefully.

Sure, Common Core proponents claim to have just recently validated their consortia tests for correlation with college outcomes, for alignment with elementary and secondary school content standards, and for technical quality. The clumsy studies they cite do not match the claims made for them, however.

SBAC and PARCC cannot be validated for their purpose of predicting college and career readiness until data are collected in the years to come on the college and career outcomes of those who have taken the tests in high school. The study cited by Common Core proponents uses the words “predictive validity” in its title. Only in the fine print does one discover that, at best, the study measured “concurrent” validity—high school tests were administered to current rising college sophomores and compared to their freshman-year college grades. Calling that “predictive validity” is, frankly, dishonest.

It might seem less of a stretch to validate SBAC and PARCC as high school exit exam replacements. After all, supposedly they are aligned to the Common Core Standards, so in any jurisdiction where the Common Core Standards prevail, they would be retrospectively aligned to the high school curriculum. Two issues tarnish this rosy picture. First, the Common Core Standards are narrow in subject matter, covering just mathematics and English Language Arts, with no attention paid to the majority of the high school curriculum.

Second, common adherence to the Common Core Standards across the States has deteriorated to the point of dissolution. As the Common Core consortia’s grip on compliance (i.e., alignment) continues to loosen, states, districts within states, and schools within districts are teaching how they want and what they want. The less aligned Common Core Standards become, the less valid the consortium tests become as measures of past learning.

As for technical quality, the Fordham Institute, which is paid handsomely by the Bill & Melinda Gates Foundation to promote Common Core and its consortia tests, published a report which purports to be an “independent” comparative standards alignment study. Among its several fatal flaws: instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employed “a brand new methodology” specifically developed for Common Core and its copyright owners, and paid for by Common Core’s funders.

Though Common Core consortia test sales pitches may be the most disingenuous, SAT and ACT spokespersons haven’t been completely forthright either. To those concerned about the inevitable degradation of predictive validity if their tests are truly aligned to the K-12 Common Core standards, public relations staffs assure us that predictive validity is a foremost consideration. To those concerned about the inevitable loss of alignment to the Common Core standards if predictive validity is optimized, they assure complete alignment.

So, all four of the test organizations have been muddling the issue. It is difficult to know what we are going to get with any of the four tests. They are all straddling or avoiding questions about the trade-offs. Indeed, we may end up with four, roughly equivalent, muddling tests, none of which serve any of their intended purposes well.

This is not progress. We should want separate tests, each optimized for a different purpose, be it measuring high school subject mastery, or predicting success in 4-year college, in 2-year college, or in a skilled trade. Instead, we may be getting several one-size-fits-all, watered-down tests that claim to do it all but, as a consequence, do nothing well. Instead of a skilled tradesperson’s complete tool set, we may be getting four Swiss army knives with roughly the same features. Instead of exploiting psychometricians’ advanced knowledge and skills to optimize three or more very different types of measurements, we seem to be reducing all of our nationally normed end-of-high-school tests to a common, generic muddle.



McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute.

Nichols-Barrer, I., Place, K., Dillon, E., & Gill, B. (2015, October 5). Final Report: Predictive Validity of MCAS and PARCC: Comparing 10th Grade MCAS Tests to PARCC Integrated Math II, Algebra II, and 10th Grade English Language Arts Tests. Cambridge, MA: Mathematica Policy Research.

Phelps, R.P. (2016, February). Fordham Institute’s pretend research. Policy Brief. Boston: Pioneer Institute.

American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME). (2014). Standards for Educational and Psychological Testing. Washington, DC: AERA.


Some Common Core Salespersons’ Salaries: DC Edu-Blob-ulants

Linked are copies of Form 990s for Marc Tucker’s National Center on Education and the Economy (NCEE), Checker Finn’s Fordham Foundation and Fordham Institute, and Bob Wise’s Alliance for Excellent Education (AEE). Each pays himself and at least one other well.

All non-profit organizations with revenues exceeding $50,000 must file Form 990s annually with the Internal Revenue Service. And, in return for the non-profits’ tax-exempt status, their Form 990s are publicly available.

As to salaries…

National Center on Education and the Economy
NCEE2013Form990 – Marc Tucker pays himself $501,087, and six others receive from $162k to $379k (p.40 of 48); his son, Joshua Tucker receives $214,813 (p. 42)
…also interesting: p.16 (contrast with p.15), pp. 19, 27, 37

Alliance for Excellent Education
AEE2013Form990 – Bob Wise pays himself $384,325, and six others receive from $162k to $227k. (see p.27 of 36)
…also interesting: p.24 (“Madoff Recovery”)

Thomas B. Fordham Foundation & Institute
FordhamF2013Form990 & FordhamI2013Form990 – With both a “Foundation” and an “Institute”, Checker Finn and Mike Petrilli can each pay themselves about $100k, twice. (see p.25 of 42)
…also interesting: p.19 ($29 million in investments; $1.5 million for an interest rate swap); p.37 (particularly the two entries for “Common Sense Offshore, Ltd.”)


Censorship at Education Next

In response to Education Next’s recent misleading articles about a fall 2015 Mathematica report that claims to (but does not) find predictive validity for the PARCC test with Massachusetts college students, I wrote the text below and submitted it to EdNext as a comment on the article. The publication neither published my comment nor provided any explanation. Indeed, the comments section appears to have vanished entirely.

“First, the report attempts to calculate only general predictive validity. The type of predictive validity that matters is “incremental predictive validity”—the amount of predictive power left over when other predictive factors are controlled. If a readiness test is highly correlated with high school grades or class rank, it provides the college admission counselor no additional information. It adds no value. The real value of the SAT or ACT is in the information it provides admission counselors above and beyond what they already know from other measures available to them.

“Second, the study administered grade 10 MCAS and PARCC tests to college students at the end of their freshman year in college, and compared those scores to their first-year grades in college. Thus, the study measures what students learned in one year of college and in their last two years of high school more than it measures what they knew as of grade 10. The study does not actually compute predictive validity; it computes “concurrent” validity.

“Third, student test-takers were not representative of Massachusetts tenth graders. All were volunteers; and we do not know how they learned about the study or why they chose to participate. Students not going to college, not going to college in Massachusetts, or not going to these colleges in Massachusetts could not have participated. The top colleges—where the SAT would have been most predictive—were not included in the study (e.g., U. Mass-Amherst, any private college, or elite colleges outside the state). Students not going to college, or attending occupational certificate training programs or apprenticeships—for whom one would suspect the MCAS would be most predictive—were not included in the study.”
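The difference between overall and incremental predictive validity can be sketched with a toy simulation (entirely invented numbers, not the Mathematica study’s data; numpy assumed available): when a readiness test is highly correlated with high school GPA, the test may predict college grades well on its own yet add very little predictive power once GPA is already in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# A latent "readiness" factor drives high school GPA, the test score,
# and first-year college GPA (all synthetic, illustrative only).
readiness = rng.normal(size=n)
hs_gpa = readiness + rng.normal(scale=0.5, size=n)
test_score = readiness + rng.normal(scale=0.5, size=n)   # highly correlated with hs_gpa
college_gpa = readiness + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X, with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared(hs_gpa.reshape(-1, 1), college_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, test_score]), college_gpa)

print(f"R^2, HS GPA alone:       {r2_gpa:.3f}")
print(f"R^2, GPA plus test:      {r2_both:.3f}")
print(f"Incremental validity:    {r2_both - r2_gpa:.3f}")  # the test's added information
```

In this sketch the increment in R-squared is small even though the test correlates strongly with college grades by itself, which is the point of the comment above: a general predictive-validity coefficient, computed without controlling for grades or class rank, can flatter a test that tells admission counselors almost nothing they did not already know.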


Hard Work by Students

In my ten years of HS teaching I saw good (hard-working, interested in learning) students do well with good teachers, and ALSO do pretty well with poor teachers…

I saw poor (not working, not interested in learning) students do poorly with poor teachers and ALSO do poorly with good teachers….

From this I have derived my Brilliant Insight:

“The most important variable in student academic achievement is student academic work” (not teacher quality, although that can make some difference)…

This Insight gains no traction, in spite of its obvious truth, I think, because ED Leaders, Pundits, Planners, Designers, etc., believe they are helpless to increase student interest in doing academic work.

So they Plan, Lead, Design, Critique, and so on, and the issue of student academic work just gets no attention.

This all seems too simple-minded, of course, but an increase in student academic work is guaranteed to improve student academic achievement (in every group) and it is irresponsible to ignore it, however difficult it may be to influence.


The Education Writers Association casts its narrowing gaze on Boston, May 1-3


Billions have been spent, and continue to be spent, promoting the Common Core Standards and their associated consortium tests, PARCC and SBAC. Nonetheless, the “Initiative” has been stopped in its tracks largely by a loose coalition of unpaid grassroots activists. That barely-organized amateurs could match the many well-organized, well-paid professional organizations tells us something about Common Core’s natural appeal, or lack thereof. Absent the injection of huge amounts of money and political mandates, there would be no Common Core.

The Common Core Initiative (CCI) does not progress, but neither does it go away. Its alleged primary benefit—alignment both within and across states (allegedly producing valid cross-state comparisons)—continues to degrade as participating states make changes that suit them. The degree of Common Core adoption varies greatly from state to state, and politicians’ claims about the degree of adoption even more so. CCI is making a mess and will leave a mess behind that will take years to clean up.

How did we arrive in this morass? Many would agree that our policymakers have failed us. Politicians on both sides of the aisle naively believed CCI’s “higher, deeper, tougher, more rigorous” hype without making any effort to verify the assertions. But, I would argue that the corps of national education journalists is just as responsible.

Too many of our country’s most influential journalists accept and repeat verbatim the advertising slogans and talking points of Common Core promoters. Too many of their stories source information from only one side of the issue. Most annoying, to those of us eager for some journalistic balance, has been some journalists’ tendency to rely on Common Core promoters to identify the characteristics and explain the motives of Common Core opponents.

An organization claiming to represent and support all US education journalists sets up shop in Boston next week for its annual “National Seminar”. The Education Writers Association’s (EWA’s) national seminars introduce thousands of journalists to sources of information and expertise. Many sessions feature journalists talking with other journalists. Some sessions host teachers, students, or administrators in “reports from the front lines” type panel discussions. But, the remaining and most ballyhooed sessions feature non-journalist experts on education policy fronting panels with, typically, a journalist or two hosting. Allegedly, these sessions interpret “all the research”, and deliver truth, from the smartest, most enlightened on earth.

Given its central role, and the profession it represents, one would expect diligence from EWA in representing all sides and evidence. Indeed, EWA claims a central purpose “to help journalists get the story right.”

Rummaging around EWA’s web site can be revealing. I located website material classified under their “Common Core” heading: 192 entries overall, including 6 EWA Radio broadcast transcripts, links to 19 research or policy reports, 1 “Story Lab”, 8 descriptions of and links to organizations useful for reporters to know, 5 seminar and 3 webinar agendas, 11 links to reporters’ stories, and 42 links to relevant multimedia presentations.

I was interested to learn the who, what, where, and how of EWA sourcing of education research and policy expertise. In reviewing the mass of material the EWA classifies under Common Core, then, I removed that which was provided by reporters and ignored that which was obviously purely informational, provided it was unbiased (e.g., non-interpretive reporting of poll results, thorough listing of relevant legislative actions). What remains is a formidable mass of material—in the form of reports, testimonies, interviews, essays, seminar and webinar transcripts, and so on.

So, whom does the EWA rely on for education policy expertise “to help journalists get the story right”? Which experts do they invite to their seminars and webinars? Whose reports and essays do they link to? Whose interviews do they link to or post? Remember, journalists are trained to represent all sides to each story, to summarize all the evidence available to the public.

That’s not how it works at the Education Writers Association, however. Over the past several years, EWA has provided speaking and writing platforms for 102 avowed Common Core advocates, 7 avowed Common Core opponents, 12 who are mostly in favor, and one who is mostly opposed.[1] Randomly select an EWA Common Core “expert” from the EWA website, and the odds exceed ten to one the person will be an advocate and, more than likely, a paid promoter.

Included among the 102 Common Core advocates for whom the EWA provided a platform to speak or write are officials from the “core” Common Core organizations, the Council of Chief State School Officers (CCSSO), the National Governors Association (NGA), the Partnership for Assessment of Readiness for College and Careers (PARCC), and the Smarter-Balanced Assessment Consortium (SBAC). Also included are representatives from research and advocacy organizations paid by the Bill and Melinda Gates Foundation and other funding sources to promote the Common Core Standards and tests: the Thomas B. Fordham Institute, the New America Foundation, the Center for American Progress, the Center on Education Policy, and the Business Roundtable. Moreover, one finds ample representation in EWA venues of organizations directly profiting from PARCC and SBAC test development activity, such as the Center for Assessment, WestEd, the Rand Corporation, and professors from the Universities of North Carolina and Illinois, Harvard and Stanford Universities, UCLA, Michigan State, and Southern Cal (USC).

Most of the small contingent of Common Core opponents does not oppose the Common Core initiative, standards, or tests per se but rather tests in general, or the current quantity of tests. Among the seven attributions to avowed opponents, three are to the National Center for Fair and Open Testing (a.k.a., FairTest), an organization that opposes all meaningful standards and assessments, not just Common Core.

The seven opponents comprise one extreme advocacy group, a lieutenant governor, one local education administrator, an education graduate student, and another advocacy group called Defending the Early Years, which argues that the grades K–2 Common Core Standards are age-inappropriate (i.e., too difficult). No think tank analysts. No professors. No celebrities.

Presumably, this configuration of evidence and points of view represents reality as the leaders of EWA see it (or choose to see it): 102 in favor and 7 opposed; several dozen PhDs from the nation’s most prestigious universities and think tanks in favor and 7 fringe elements opposed. Accept this as reality, and pro-CCI propaganda characterizations of their opponents might seem reasonable. Those in favor of CCI are prestigious, knowledgeable, trustworthy authorities. Those opposed are narrow-minded, self-interested, uninformed, inexpert, or afraid of “higher, deeper, tougher, more rigorous” standards and tests. Those in favor of CCI want progress; those opposed do not.

In a dedicated website section, EWA describes and links to eight organizations purported to be good sources for stories on the Common Core. Among them are the core CCI organizations Achieve, CCSSO, NGA, PARCC, and SBAC; and the paid CC promoters, the Fordham Institute. The only opposing organization suggested? — FairTest.

There remain two of the EWA’s favorite information sources, the American Enterprise Institute (AEI) and the American Federation of Teachers (AFT), that I have categorized as mostly pro-CCI. Both received funding from the Gates Foundation early on to promote the Initiative. When the tide of public opinion began to turn against the Common Core, however, both organizations began shuffling their stances and straddling their initial positions. Each has since adopted the “Common Core is a great idea, but it has been poorly implemented” theme.

So, what of the great multitude who desire genuinely higher standards and consequential tests and recognize that CCI brings neither? …who believe Common Core was never a good idea, never made any sense, and should be completely dismantled? Across several years, categories and types of EWA coverage, one finds barely a trace of representation.

The representation of research and policy expertise at EWA national seminars reflects that at its website. Keynote speakers include major CCI advocates College Board President David Coleman (twice), US Education Secretary Arne Duncan (twice), Secretary John King, Governor Bill Haslam, and “mostly pro” AFT President Randi Weingarten, along with the unsure Governor Charlie Baker. No CCI opponents.

Among other speakers presented as experts in CCI related sessions at the Nashville Seminar two years ago were 14 avowed CCI advocates[2], one of the “mostly pro” variety, and one critic, local education administrator Carol Burris. At least ten of the 14 pro-CCI experts have worked directly in CCI-funded endeavors. Last year’s Chicago Seminar featured nine CCI advocates[3] and one opponent, Robert Schaeffer of FairTest. Five of the nine advocates have worked directly in CCI-funded endeavors.

In addition to Secretary John King’s keynote, this year’s Boston Seminar features a whopping 16 avowed CCI proponents, two of the “mostly pro” persuasion, and one opponent, Linda Hanson, a local area educator and union rep. At least ten of the 16 proponents have worked in CCI-funded activities.

One session entitled “The Massachusetts Story” might have invited some of those responsible for the rise of the Commonwealth from a middling performer twenty years ago to the nation’s academic leader ten years ago (some of whom feel rather upset with the Commonwealth’s adoption of Common Core Standards in 2010). Sandy Stotsky, for example, wrote many of the English Language Arts standards in the 1990s, might be the country’s most prolific writer on CCI issues, and lives in Boston. Instead, EWA invited three after-the-fact regional leaders who promote the CCI.

In general, some of EWA’s most called-upon experts work in think tanks. EWA loves think tanks. While in Chicago, they could have invited scholars affiliated with the Heartland Institute, a staunch opponent of the CCI. But, they didn’t. For the Boston meeting, they could have invited scholars affiliated with the Pioneer Institute (e.g., Sandy Stotsky and R. James Milgram, both of whom served on the CCI’s evaluation committee); Pioneer is arguably the country’s leading source of scholarly opposition to the CCI. But, they haven’t.

Turns out, the only think tanks that matter in EWA’s judgment are national think tanks. Not being located in Washington, DC, Heartland and Pioneer might be considered “regional” think tanks, despite all the effort they put into national issues. Instead of inviting locally-based think tankers opposed to the CCI in Chicago and Boston, EWA preferred to fly CCI think tank advocates out from DC.

For the “reform” side of education issues, in general, EWA invitations appear stuck inside a tight little circle. EWA frequently calls upon Harvard-affiliated folk (e.g., Chingos, Ferguson, Fryer, Hess, Ho, Kane, Long, Loveless, Mehta, Putnam, Reville, Rhee, Sahlberg, Schwartz, West). EWA is also quite fond of anyone who has worked for Chester “Checker” Finn (e.g., Petrilli, Pondiscio, Northern, Smarick, Brickman, and Polikoff).

There are many thousands of education researchers in the world, thousands of higher education institutions, and hundreds of relevant research journals. But the EWA has chosen to rely almost exclusively on an infinitesimal fraction of that pool for expertise. Ironically, the tiny group on which they depend comprises some of the world’s most poorly read and censorious researchers.[4]

EWA likes the Fordham Institute especially well. Within the past few years, EWA has conferred upon Fordham an EWA best web site award and, to Fordham’s Robert Pondiscio, a National Award for Education Reporting in the “Education Organizations and Experts” category. Fordham and Pondiscio accepted their awards in Nashville.

Several possible explanations for the Education Writers Association expertise sourcing myopia come to mind, such as a lack of resources, convenience, naïveté, passivity (e.g., expecting experts to contact them rather than looking for them), and an irresistible attraction to money and power (e.g., EWA sponsors seem very well represented at EWA venues). But, chief among them, to my observation, are elitism and a wholesale conflation of celebrity for expertise. Far too often, the EWA features “expert” opinion from someone who is well known as a commentator on education policy generally (or, at least, well known generally) but who knows next to nothing about the topic at hand.

At EWA seminars, whether national, regional, or topical, one observes an effort to make good use of local education researchers and university professors, but not just any. There are several universities in Tennessee, but Vanderbilt professors overwhelmed the agenda at EWA’s Nashville meeting. Likewise, there exist many universities in the Chicago area, but EWA preferred to invite those from the University of Chicago and Northwestern, the two most elite. Boston University is hosting next week’s Boston meeting, and several of its academics will be involved in session panels. But, twice as many will come from Harvard.

In a variety of ways, the Education Writers Association functions to centralize expertise sourcing. If there were no EWA, the thousands of education journalists who attend their seminars would initiate all their expertise sourcing on their own. The result, in the absence of EWA’s suggestions, would be a much wider variety of expertise sourced. And the US populace would be much better informed.

The EWA is run by education journalists with national ambitions. Through efforts such as the EWA Seminars, the national group imposes its bias toward Washington, DC power and celebrity on its thousands of members. As a result, it serves not as muckraker or spokesperson for the less powerful, but largely to boost the public relations push of wealthy, established interests.

Could all this just be sour grapes? After all, right there on its web site EWA offers in large, bold letters “Opportunities for Exposure”. If one is dissatisfied with the status quo, why not take them up on their offer? The body of the text reads “Sponsorship, Exhibition, & Advertising Available Now!”. Oh, right, that’s why.



[1] Not counting the few sources delivering neutral information, nor the “reports from the front lines” panels of teachers and school administrators (most of whom, at EWA meetings, appear to support the CCI).

[2] Michael Cohen (Achieve), Terry Holiday (Commonwealth of Kentucky), Jamie Woodson (TN SCORE), Dennis Van Roekel (NEA), Amber Northern (Fordham Institute), William Schmidt (Michigan State U), Sandra Alberti (Student Achievement Partners), Jacqueline King (SBAC), Laura Slover (PARCC), Tommy Bice (State of Alabama), Kristen DiCerbo (Pearson Inc.), Kevin Huffman (TN DOE), Lisa Guernsey (New America Foundation), and Robert Pondiscio (Education Next, Fordham Institute)

[3] Morgan Polikoff (USC, Fordham), Andy Isaacs (Everyday Math, U. Chicago), Dana Cartier (IL Center for School Improvement), Diane Briars (NCTM), Matt Chingos (Brookings), Scott Marion (Center for Assessment), Chris Minnich (CCSSO), James Pellegrino (U. Illinois-Chicago), and Andrew Latham (WestEd).





It seems that some Massachusetts representatives don’t think that parents, teachers, and administrators should be allowed to vote by secret ballot on whether they want to keep Common Core’s inferior standards or return to the state’s superior standards, junked by its state board of education in July 2010. Why do these state representatives think that it is better for Bay State schools to address standards written in 2009 in Washington, DC, by unqualified people, funded chiefly by the Bill and Melinda Gates Foundation? Here is a State House News reporter’s April 26 account of how some Beacon Hill legislators think about the ballot question to end Common Core in Massachusetts.

By Andy Metzger
STATE HOUSE, BOSTON, APRIL 26, 2016…..At odds over the future of charter schools in Massachusetts, the co-chairwomen of the Education Committee may be more closely aligned on a proposal to revert state curriculum standards to their prior iteration.

The proposal to restore education standards in place before Massachusetts adopted Common Core in 2010 appears headed for the ballot, as does a citizens initiative to increase charter school enrollment.

Rep. Alice Peisch, a Wellesley Democrat and House chairwoman of the Education Committee, said the Common Core repeal would be a mistake. Her co-chairwoman on the committee, Sen. Sonia Chang-Diaz, said she is disinclined to vote for the proposal, but hasn’t yet staked out a position.
“If that ballot question were to pass, that is six years of work that will be irrelevant,” Peisch told members of local school committees on Tuesday. She said, “I think it would be a huge mistake for a ballot question to determine what students learn.”

Sandra Stotsky, who helped draft the old “first class” Massachusetts standards, told the News Service pulling the Bay State out of Common Core would stop the “damage” caused by the multi-state standard.

“The ballot question says let’s go back to the standards we know worked,” said Stotsky, who said Common Core includes “nonsense statements.”

The referendum would reverse a move by the Board of Elementary and Secondary Education taken in 2010 and restore the prior frameworks and establish new processes for developing curriculum frameworks.

Peisch said the state’s teachers “have all been trained in the new standards” and the state is going out to bid for a new assessment – dubbed MCAS 2.0 – “that will be aligned with the standards.”

A former senior associate commissioner in the state’s education department, Stotsky said the Common Core standards are “unteachable.”

“They’re unteachable in that they require skills that kids don’t have and that teachers can’t easily teach,” Stotsky told the News Service.

Speaking at the Tuesday event organized by the Massachusetts Association of School Committees, Chang-Diaz, a Jamaica Plain Democrat and co-chairwoman of the Education Committee, had a more measured take on the proposal.

“For folks who are worried about losing self-determination as a state over our own curriculum frameworks, there’s nothing about the Common Core that prevents us from doing that,” Chang-Diaz said. She said she has “trouble understanding” the “content-based objection” to Common Core.

Chang-Diaz told the News Service she wanted to read the question before forming a conclusion.

“I haven’t read it yet, so I think I will not be voting for that ballot question, but I’m a stickler for reading things before I state a final position,” Chang-Diaz said. She said, “I don’t have to vote on that for a while.”

The Common Core repeal referendum (H 3929) is currently before the Education Committee, which had a hearing on it in March. Without action by the Legislature before May 4 – which appears unlikely – supporters of the move away from Common Core could collect another 10,792 signatures around the state to place the matter before voters.


Fordham Institute’s pretend research

The Thomas B. Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT’s Aspire, and the Commonwealth of Massachusetts’ MCAS.[ii] Of course, anyone familiar with Fordham’s past work knew beforehand which tests would win.

This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.

  1. Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core’s wealthy benefactors. But, they imply that they have covered a relevant and adequately wide range of sources.
  2. Instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ “a brand new methodology” specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core’s funders.
  3. Instead of suggesting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.
  4. Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away—“If you don’t care for the standards…you should probably ignore this study” (p. 4).
  5. Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.

1.  Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.[iii] A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

  • the Human Resources Research Organization (HumRRO),[vi]
  • the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation “Criteria,”[vii]
  • the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
  • Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

The Common Core’s grandees have only ever hired their own well-subsidized grantees to evaluate their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual Mental Measurements Yearbook for the entire world to see and critique. Indeed, Buros exists to conduct test reviews, and retains hundreds of the world’s brightest and most independent psychometricians on its reviewer roster. Why did Common Core’s funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute… that is, had the purpose genuinely been an objective evaluation.

2.  A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of North American testing experts is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.[x]

Had Fordham compared the tests using the Standards for Educational and Psychological Testing (or any of a number of other widely respected test evaluation standards, guidelines, or protocols[xi]), SBAC and PARCC would have flunked. They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.[xii]

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.[xiii],[xiv] Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.[xv]

The authors write, “The quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels” (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict. In any case, given the evaluation criteria employed, the outcome was foreordained regardless of whom they hired to review, not unlike a rigged election in a dictatorship where voters’ choices are restricted to already-chosen candidates.

Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 14 of the criteria.[xvi] And those just happened to be the criteria most favorable to PARCC and SBAC.

Without exception the Fordham study avoided all the evaluation criteria in the categories:

“Meet overall assessment goals and ensure technical quality”,

“Yield valuable reports on student progress and performance”,

“Adhere to best practices in test administration”, and

“State specific criteria”[xvii]

What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state’s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).

The report does not evaluate the “quality” of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests. The only evaluative criteria used from the CCSSO’s Criteria are in the two categories “Align to Standards—English Language Arts” and “Align to Standards—Mathematics” and, even then, only for grades 5 and 8.

Nonetheless, the authors claim, “The methodology used in this study is highly comprehensive” (p. 74).

The authors of the Pioneer Institute’s report How PARCC’s false rigor stunts the academic growth of all students,[xviii] recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice/short answer/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xix]. Ironically, it is they—opponents of traditional testing content and formats—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xx]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s minds when they think about “sustained research.” Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xxi]

PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure, in a minute or two, the outcome of what is normally a weeks- or months-long project. It is the attempt to measure what cannot be well measured on standardized tests that makes PARCC and SBAC tests “deeper” than others. In practice, the allegedly deeper parts are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxiii] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

3.  The authors continue the Common Core sales tradition of attributing to their tests benefits for which no evidence exists. For example, the Fordham report claims that SBAC and PARCC will:

“make traditional ‘test prep’ ineffective” (p. 8)

“allow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do” (p. 8)

produce “test scores that more accurately predict students’ readiness for entry-level coursework or training” (p. 11)

“reliably measure the essential skills and knowledge needed … to achieve college and career readiness by the end of high school” (p. 11)

“…accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.” (p. 3)

eliminate the problem of “students … forced to waste time and money on remedial coursework.” (p. 73)

help “educators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students’ success.” (p. 73)

The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They have more in common with carnival fortune-telling than with empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But, that just affirms how irresponsible it is to make such claims absent any evidence.

Furthermore, in most cases, past experience suggests just the opposite of what Fordham asserts. Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, convoluted), introducing more “construct-irrelevant variance”—that is, students will get lower scores for failing to figure out the formats or cope with computer-operation issues, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.

As for Common Core, SBAC, and PARCC eliminating the “problem of” college remedial courses, that will be accomplished simply by cancelling remedial courses, whether or not they are needed, and by lowering college entry-course standards to the level of current remedial courses.

4.  When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: “If you don’t care for the standards…you should probably ignore this study” (p. 4).

Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries and a scholar of curriculum and instruction so widely respected that she could easily have become wealthy had she chosen to succumb to the financial temptations of the Common Core’s profligacy, as so many others have. Stotsky, apparently, authored the Fordham Institute’s “very first study” in 1997. Presumably, the authors drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)

Tellingly, one will find Stotsky’s name nowhere after the first paragraph. None of her (or anyone else’s) many devastating critiques of the Common Core tests is either mentioned or referenced. Genuine research does not hide or dismiss its critiques; it addresses them.

Ironically, the authors write, “A discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.” Indeed.

5.  Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don’t.

For PARCC and SBAC one reads:

“strong content, quality, and rigor”

“stronger tests, which encourage better, broader, richer instruction”

“tests that focus on the essential skills and give clear signals”

“major improvements over the previous generation of state tests”

“complex skills they are assessing.”

“high-quality assessment”

“high-quality assessments”

“high-quality tests”

“high-quality test items”

“high quality and provide meaningful information”

“carefully-crafted tests”

“these tests are tougher”

“more rigorous tests that challenge students more than they have been challenged in the past”

For other tests one reads:

“low-quality assessments poorly aligned with the standards”

“will undermine the content messages of the standards”

“a best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today’s college and career readiness standards”

“have generally focused on low-level skills”

“have given students and parents false signals about the readiness of their children for postsecondary education and the workforce”

Appraising its own work, Fordham writes:

“groundbreaking evaluation”

“meticulously assembled panels”

“highly qualified yet impartial reviewers”

Considering those who have adopted SBAC or PARCC, Fordham writes:

“thankfully, states have taken courageous steps”

“states’ adoption of college and career readiness standards has been a bold step in the right direction.”

“adopting and sticking with high-quality assessments requires courage.”


A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.

The Fordham report claims that PARCC and SBAC are governed by “member states,” whereas ACT Aspire is owned by a private organization. Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors Association, and only each state’s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state, or rather its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.[xxiv]

Just so you know: on page 71, Fordham recommends that states eliminate any tests not aligned to the Common Core Standards, supposedly in the interest of efficiency.

In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, “We at Fordham don’t plan to stay in the test-evaluation business”.


[i] Nancy Doorey & Morgan Polikoff. (2016, February). Evaluating the content and quality of next generation assessments. With a Foreword by Amber M. Northern & Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute.

[ii] PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).

[iii] The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli each to draw two six-figure salaries instead of just one.


[v] See, for example, ; ; ;

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 23 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2016 collectively exceeding $100 million.


[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million.

[x] The authors write that the standards they use are “based on” the real Standards. But, that is like saying that Cheez Whiz is based on cheese. Some real cheese might be mixed in there, but it’s not the product’s most distinguishing ingredient.

[xi] (e.g., the International Test Commission’s (ITC) Guidelines for Test Use; the ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores; the ITC Guidelines on the Security of Tests, Examinations, and Other Assessments; the ITC’s International Guidelines on Computer-Based and Internet-Delivered Testing; the European Federation of Psychologists’ Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)

[xii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, for California: Michael W. Kirst & Christopher Mazzeo. (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993–96. Phi Delta Kappan, 78(4); Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session. (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin. (1997, October). Comparing assessments and tests. Education Reporter, 141; David Klein. (2003). “A Brief History of American K-12 Mathematics Education in the 20th Century,” in James M. Royer (Ed.), Mathematical Cognition (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). A study of core course-taking patterns: ACT-tested graduates of 1991–1993, and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp & C. B. Summers. (2002, Fall). “Education,” in P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Rockville, MD: Maryland Public Policy Institute; Montgomery County Public Schools. (2002, February 11). “Joint Teachers/Principals Letter Questions MSPAP,” Public Announcement, Rockville, MD; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author.

[xiii] Criteria for High Quality Assessments 03242014.pdf

[xiv] A rationale is offered for why a brand new set of test evaluation criteria had to be developed (p. 13). Fordham claims that new criteria were needed so that some criteria could be weighted more than others. But, weights could easily have been applied to any criteria, including the tried-and-true, preexisting ones.

[xv] For an extended critique of the CCSSO Criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68.

[xvi] Doorey & Polikoff, p. 14.

[xvii] MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) as a grade 10 high school exit exam, that tests students in several subject fields (and not just ELA and math), and provides specific and timely instructional feedback.

[xviii] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute.

[xix] It is perhaps the most enlightening paradox that, amid Common Core proponents’ profuse effusion of superlative adjectives and adverbs advertising their “innovative,” “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xx] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7.

[xxi] McQuillan, Phelps, & Stotsky, p. 46.

[xxiii] Linda Darling-Hammond, et al., pp. 16-18.

[xxiv] For an in-depth discussion of these governance issues, see Peter Wood’s excellent Introduction to Drilling Through the Core.

Posted in College prep, Common Core, Education policy, Education Reform, Ethics, K-12, Mathematics, Reading & Writing, research ethics, Richard P. Phelps, Testing/Assessment | 3 Comments

How the USED has managed to get it wrong, again

An interesting dilemma. Common Core’s writers planned for a grade 11 test that would tell us whether or not students were college and career ready. Parents and state legislators don’t know who sets the cut score, what test items are on it, or what exactly a passing score on a college readiness test means academically. Yet, all those who pass and enroll in a post-secondary educational institution are entitled to credit-bearing coursework in their freshman year.

So, why should most students wanting to go to a public college take a college admissions test, such as the ACT or SAT? No need to waste time and money on another, unnecessary test that is also “aligned to” Common Core, we are told.

But, that means the SAT and ACT companies lose a lot of money. So, what does the USED do to try to make sure they don’t lose money? It tells states that instead of a Common Core-based test in grade 11, they can require the SAT or ACT for all students for “federal accountability.” Almost a dozen states have fallen for this idiotic idea.

It turns out that an increasing number of colleges are no longer requiring SAT or ACT scores. Why? Among other reasons, the tests can no longer tell them much about success at post-secondary institutions where all students who pass a grade 11 Common Core-based test are entitled to credit-bearing courses in their freshman year—and can’t be given a placement test to determine remediation level. Some public college presidents or administrators in each state had already agreed to that on the state’s application for Race to the Top (RTTT) funds. Since then, more have. God help the freshman course instructor who doesn’t pass students who were declared college-ready to begin with.

Nor can the tests tell the colleges whether or not the students know much about whatever they studied in K-12. Why? The tests were developed to serve as college admissions tests that predict success in college, not as high school achievement tests. According to some math teachers, they contain material (some Algebra II and trigonometry items) that students haven’t been taught in a Common Core-based curriculum, and they don’t assess everything important that has been taught.

Worse yet, USED seems to want states to eliminate all other tests—the non-Common Core-based tests, possibly including teacher-made tests—on the grounds of getting rid of excessive testing, and to make passing a grade 11 college and career ready test all that is required for a high school diploma (the requirements might include course titles whose content is presumably addressed by Common Core standards, such as English, Algebra I, and Geometry). Almost everyone will have to be passed, or there will be an uproar from the parents of low-achieving students. (A writing sample is no longer required by the SAT.)

States adopted Common Core because they believed it would be the silver bullet that made all students college and career ready. If they also believe that all students declared college and career ready are thereby qualified to take credit-bearing coursework in post-secondary education, how can they not give a high school diploma to anyone who passes the grade 11 test, even if they don’t know what’s on it, who set the cut score and determined who should pass, or what passing the test really means academically? The SAT and ACT are private companies and are not obligated to release any information they don’t want to release.

Who cares if all or most kids don’t want to go to college? Who cares what’s on the tests given in grade 11? All that matters is that the state has met what is required for federal accountability and will get ESSA funds and other money for its K-12 schools, while it taxes those who can still afford to pay for the rising costs of less and less teaching and learning. Graduate schools may not care, since they will be able to find enough tuition-paying qualified students from other countries.

Posted in College prep, Common Core, Education policy, ESSA, K-12, Reading & Writing, Sandra Stotsky, Testing/Assessment | 1 Comment