The Testing Charade: Pretending to Make Schools Better, by Daniel Koretz [book review]
Reviewed by Richard P. Phelps
by Jerome Dancis
This summer, I obtained the college remediation data for my state of Maryland. Well, just 2014, the latest year available — so BCC, i.e., before Common Core became the basis of the state tests in Maryland.
Does anyone know of similar data for other states?
Fewer Students Learning Arithmetic and Algebra
Analysis based on data from the Maryland Higher Education Commission’s (MHEC) Student Outcome and Achievement Report (SOAR).
The data for my state of Maryland (MD) follow. (These data may be typical of many of the 45 states that adopted the NCTM Standards.)
Decline in the Percent of Freshmen Entering Colleges in Maryland Who Knew Arithmetic and Real High School Algebra I
Year                1998   2005   2006   2014
Whites               67%    60%    58%    64%
African-Americans    44%    33%    36%    37%
Hispanics            56%    42%    43%    44%
See my [Univ. of Maryland] Faculty Voice article; scroll down to the bottom of Page 1.
Caveat. These data describe only those graduates of Maryland high schools in 1998, 2005, 2006, and 2014 who entered a college in Maryland the same year.
Related Data. From 1998 to 2005, the number of white graduates increased by 11% (from 14,473 to 16,127), but the number who knew arithmetic and high school algebra I decreased (from 9703 to 9619) (as determined by college placement tests).
Similarly, from 1998 to 2005, the number of African-American graduates who were minimally ready for college math went down even though college enrollment increased by 21% for females and 31% for males.
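The arithmetic behind the figures above can be checked directly. A minimal sketch (the raw counts come from the text; the derived rates are my own calculation, rounded as in the table):

```python
# Counts of white Maryland graduates quoted in the text above.
grads_1998, grads_2005 = 14_473, 16_127   # total graduates entering MD colleges
ready_1998, ready_2005 = 9_703, 9_619     # those who knew arithmetic + Algebra I

# Percent change in total graduates (the text reports roughly +11%).
growth = (grads_2005 - grads_1998) / grads_1998 * 100
print(f"Graduates grew {growth:.1f}%")

# Readiness rates implied by the raw counts; these should reproduce
# the 67% (1998) and 60% (2005) entries in the Whites row of the table.
rate_1998 = ready_1998 / grads_1998 * 100
rate_2005 = ready_2005 / grads_2005 * 100
print(f"Ready in 1998: {rate_1998:.0f}%; ready in 2005: {rate_2005:.0f}%")
```

So even as the pool of graduates grew by about 11%, the absolute number prepared for college math shrank, which is why the readiness rate falls from 67% to 60%.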
One likely cause of the downturn: high school Algebra I used to be the algebra course colleges expected. Under the specter of the MD School Assessments (MSAs) and High School Assessments (HSAs), school administrators bent instructional programs out of shape in order to teach to the state tests. The MSAs in math and the MD Voluntary Math Curriculum marginalize Arithmetic, leaving too many students without sufficient time to learn it. Arithmetic lessons were largely arithmetic with a calculator. The MD HSA on Algebra was algebra with a graphing calculator; it avoided the arithmetic and the arithmetic-based algebra that students would need in college, such as knowing that 3x + 2x = 5x and that 9×8 = 72. I nicknamed it the MD HSA on “Pretend Algebra.”
Say what you will about Achieve, PARCC, Fordham, CCSSO, and NGA, some of the organizations responsible for pushing the Common Core Initiative on us all. But their financial records are publicly available.
Not so for some other organizations responsible for the same Common Core promotion. The Smarter Balanced Assessment Consortium (SBAC) and the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) have absorbed many millions of taxpayer and foundation dollars over the years. But their financial records have been hidden inside the vast, nebulous cocoon of the University of California – Los Angeles (UCLA). UCLA’s financial records, of course, are publicly available, but the amounts there are aggregated at a level that subsumes thousands of separate, individual entities.
UCLA is a tax-supported state institution, however, and California has an open records law on the books. After some digging, I located the UCLA office responsible for records requests and wrote to them. Following is a summary of our correspondence to date:
July 5, 2017
I hope that you can help me. I have spent a considerable amount of time clicking around in search of financial reports for the Smarter Balanced Assessment Consortium (SBAC) and the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), both “housed” at UCLA (or, until just recently in SBAC’s case). Even after many hours of web searching, I still have no clue as to where these data might be found.
Both organizations are largely publicly funded through federal grants. I would like to obtain revenue and expenditure detail on the order of what a citizen would expect to see in a nonprofit organization’s Form 990. I would be happy to search through a larger database that contains relevant financial details for all of UCLA, so long as the details for SBAC and CRESST are contained within and separately labeled.
I would like annual records spanning the lifetimes of each organization: SBAC only goes back several years, but CRESST goes back to the 1980s (in its early years, it was called the Center for the Study of Evaluation).
Please tell me what I need to do next.
Thank you for your time and attention.
Best Wishes, Richard Phelps
July 6, 2017
RE: Acknowledgement of Public Records Request – PRR # 17-4854
Dear Mr. Phelps:
This letter is to acknowledge your request under the California Public Records Act (CPRA) dated July 5, 2017, herein enclosed. Information Practices (IP) is notifying the appropriate UCLA offices of your request and will identify, review, and release all responsive documents in accordance with relevant law and University policy.
Under the CPRA, Cal. Gov’t Code Section 6253(b), UCLA may charge for reproduction costs and/or programming services. If the cost is anticipated to be greater than $50.00 or the amount you authorized in your original request, we will contact you to confirm your continued interest in receiving the records and your agreement to pay the charges. Payment is due prior to the release of the records.
As required under Cal. Gov’t Code Section 6253, UCLA will respond to your request no later than the close of business on July 14, 2017. Please note, though, that Section 6253 only requires a public agency to make a determination within 10 days as to whether or not a request is seeking records that are publicly disclosable and, if so, to provide the estimated date that the records will be made available. There is no requirement for a public agency to actually supply the records within 10 days of receiving a request, unless the requested records are readily available. Still, UCLA prides itself on always providing all publicly disclosable records in as timely a manner as possible.
Should you have any questions, please contact me at (310) 794-8741 or via email at firstname.lastname@example.org and reference the PRR number found above in the subject line.
Assistant Manager, Information Practices
July 14, 2017
RE: Public Records Request – PRR # 17-4854
Dear Mr. Phelps:
The purpose of this letter is to confirm that UCLA Information Practices (IP) continues to work on your public records request dated July 5, 2017. As allowed pursuant to Cal. Gov’t Code Section 6253(c), we require additional time to respond to your request, due to the following circumstance(s):
The need to search for and collect the requested records from field facilities or other establishments that are separate from the office processing the request.
IP will respond to your request no later than the close of business on July 28, 2017 with an estimated date that responsive documents will be made available.
Should you have any questions, please contact me at (310) 794-8741 or via email at email@example.com and reference the PRR number found above in the subject line.
Assistant Manager, Information Practices
July 28, 2017
Dear Mr. Phelps,
Please know UCLA Information Practices continues to work on your public records request, attached for your reference. I will provide a further response regarding your request no later than August 18, 2017.
Should you have any questions, please contact me at (310) 794-8741 or via email and reference the PRR number found above in the subject line.
UCLA Information Practices
July 29, 2017
Thank you. RP
August 18, 2017
Re: Public Records Request – PRR # 17-4854
Dear Mr. Richard Phelps:
UCLA Information Practices (IP) continues to work on your public records request dated July 5, 2017. As required under Cal. Gov’t Code Section 6253, and as noted in our email communication with you on July 28, 2017, we are now able to provide you with the estimated date that responsive documents will be made available to you, which is September 29, 2017.
As the records are still being compiled and/or reviewed, we are not able at this time to provide you with any potential costs, so that information will be furnished in a subsequent communication as soon as it is known.
Should you have any questions, please contact me at (310) 794-8741 or via email at firstname.lastname@example.org and reference the PRR number found above in the subject line.
Assistant Manager, Information Practices
September 29, 2017
Dear Mr. Richard Phelps,
Unfortunately, we must revise the estimated availability date regarding your attached request as the requisite review has not yet been completed. We expect to provide a complete response by November 30, 2017. We apologize for the delay.
Should you have any questions, please contact our office at (310) 794-8741 or via email, and reference the PRR number found above in the subject line.
UCLA Information Practices
September 29, 2017
I believe that if you are leaving it up to CRESST and SBAC to voluntarily provide the information, they will not be ready Nov. 30 either. RP
…at the Independent Voter Network website, https://IVN.US .
New in the Nonpartisan Education Review:
Cognitive Science and the Common Core Mathematics Standards
by Eric A. Nelson
Between 1995 and 2010, most U.S. states adopted K–12 math standards which discouraged memorization of math facts and procedures. Since 2010, most states have revised standards to align with the K–12 Common Core Mathematics Standards (CCMS). The CCMS do not ask students to memorize facts and procedures for some key topics and delay work with memorized fundamentals in others.
Recent research in cognitive science has found that the brain has only minimal ability to reason with knowledge that has not previously been well-memorized. This science predicts that students taught under math standards that discouraged initial memorization for math topics will have significant difficulty solving numeric problems in mathematics, science, and engineering. As one test of this prediction, in a recent OECD assessment of numeracy skills among 22 developed-world nations, U.S. 16–24 year olds ranked dead last. Discussion will include steps that can be taken to align K–12 state standards with practices supported by cognitive research.
My comments below are in response to the USED request for comments on existing USED regulations. To submit your own, follow the instructions at: https://www.regulations.gov/document?D=ED-2017-OS-0074-0001
To: Hilary Malawer, Assistant General Counsel, Office of the General Counsel, U.S. Department of Education
From: Richard P. Phelps
Date: July 8, 2017
Re: Evaluation of Existing Regulations
I encourage the US Education Department to eliminate education research centers from any current and future funding. Ostensibly, federally funded education research centers fill a “need” for more research to guide public policy on important topics. But the research centers are almost entirely unregulated, so they can do whatever they please. And what they please is too often the promotion of their own careers and the suppression or denigration of competing ideas and evidence.
Federal funding of education research centers concentrates far too much power in too few hands. And that power is nearly unassailable. One USED-funded research center, the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), blatantly and repeatedly misrepresented research I had conducted while at the U.S. Government Accountability Office (GAO) in favor of its own small studies on the same topic. I was even denied attendance at public meetings where my research was misrepresented. Promises to correct the record were made, but not kept.
When I appealed to the USED project manager, he replied that he had nothing to say about “editorial” matters. In other words, a federally funded education research center can write and say anything that pleases, or benefits, the individuals inside.
Capturing a federally funded research center contract tends to boost the professional prominence of the winners stratospherically. In the case of CRESST, the principals assumed control of the National Research Council’s Board on Testing and Assessment, where they behaved typically: citing themselves and those who agree with them, and ignoring, or demonizing, the majority of the research that contradicted their work and policy recommendations.
Further, CRESST principals now seem to have undue influence on the assessment research of the international agency, the Organisation for Economic Co-operation and Development (OECD), which, as if on cue, has published studies that promote the minority of the research sympathetic to CRESST doctrine while simply ignoring even the existence of the majority of the research that is not. The rot, the deliberate suppression of the majority of the relevant research, has spread worldwide, and the USED funded it.
In summary, the behavior of the several USED-funded research centers I have followed over the years meets or exceeds the following thresholds identified in the President’s Executive Order 13777:
(ii) Are outdated, unnecessary, or ineffective;
(iii) Impose costs that exceed benefits;
(iv) Create a serious inconsistency or otherwise interfere with regulatory reform initiatives and policies;
(v) Are inconsistent with the requirements of section 515 of the Treasury and General Government Appropriations Act, 2001 (44 U.S.C. 3516 note), or the guidance issued pursuant to that provision, in particular those regulations that rely in whole or in part on data, information, or methods that are not publicly available or that are insufficiently transparent to meet the standard for reproducibility.
Below, I cite only relevant documents that I wrote myself, so as not to implicate anyone else. As the research center principals gain power, fewer and fewer of their professional compatriots are willing to disagree with them. The more power they amass, the more difficult it becomes for contrary evidence and points of view, no matter how compelling or true, to even get a hearing.
Phelps, R. P. (2015, July). The Gauntlet: Think tanks and federally funded centers misrepresent and suppress other education research. New Educational Foundations, 4. http://www.newfoundations.com/NEFpubs/NEF4Announce.html
Phelps, R. P. (2014, October). Review of Synergies for Better Learning: An International Perspective on Evaluation and Assessment (OECD, 2013), Assessment in Education: Principles, Policies, & Practices. doi:10.1080/0969594X.2014.921091 http://www.tandfonline.com/doi/full/10.1080/0969594X.2014.921091#.VTKEA2aKJz1
Phelps, R. P. (2013, February 12). What Happened at the OECD? Education News.
Phelps, R. P. (2013, January 28). OECD Encourages World to Adopt Failed US Ed Programs. Education News.
Phelps, R. P. (2013). The rot spreads worldwide: The OECD – Taken in and taking sides. New Educational Foundations, 2(1). Preview: http://www.newfoundations.com/NEFpubs/NEFv2Announce.html
Phelps, R. P. (2012, June). Dismissive reviews: Academe’s Memory Hole. Academic Questions, 25(2), pp. 228–241. doi:10.1007/s12129-012-9289-4 https://www.nas.org/articles/dismissive_reviews_academes_memory_hole
Phelps, R. P. (2012). The effect of testing on student achievement, 1910–2010. International Journal of Testing, 12(1), 21–43. http://www.tandfonline.com/doi/abs/10.1080/15305058.2011.602920
Phelps, R. P. (2010, July). The source of Lake Wobegon [updated]. Nonpartisan Education Review / Articles, 1(2). http://nonpartisaneducation.org/Review/Articles/v6n3.htm
Phelps, R. P. (2000, December). High stakes: Testing for tracking, promotion, and graduation, Book review, Educational and Psychological Measurement, 60(6), 992–999. http://richardphelps.net/HighStakesReview.pdf
Phelps, R. P. (1999, April). Education establishment bias? A look at the National Research Council’s critique of test utility studies. The Industrial-Organizational Psychologist, 36(4) 37–49. https://www.siop.org/TIP/backissues/Tipapr99/4Phelps.aspx
 In accordance with Executive Order 13777, “Enforcing the Regulatory Reform Agenda,” the Department of Education (Department) is seeking input on regulations that may be appropriate for repeal, replacement, or modification.
Here in DC, the nation’s capital, which has enjoyed Home Rule since 1974, but remains ultimately under the thumb of Congress and the President (thanks to Art. I, Section 8 of the Constitution), one never knows what surprise awaits each new day. These days, one need not be addicted to social media or even tea leaves to hypothesize what’s around the corner. Something as politically innocuous as an “Alley Restoration” could be a harbinger of things to come.
A few weeks ago, as I turned the corner into my alley, I was struck by signs announcing an “Alley Restoration” and a change in our calendar. Not since Pope Gregory XIII or even Julius Caesar … (except for the brief anticlerical calendar of the French Revolution). Don’t blame Marion Barry; he has passed to his reward.
With the testing opt-out movement growing in popularity in 2016, Common Core’s profiteers began to worry. Lower participation enough and the entire enterprise could be threatened: with meaningless aggregate scores; compromised test statistics vital to quality control; and a strong signal that many citizens no longer believe the Common Core sales pitch.
The Educational Testing Service (ETS) was established decades ago by the Carnegie Foundation to serve as an apolitical research laboratory for psychometric work. For a while, ETS played that role well, producing some of the world’s highest-quality, most objective measurement research.
In fits and starts over the past quarter century, however, ETS has commercialized. At this point, there should be no doubt in anyone’s mind that ETS is a business: a business that relies on contracts and that aims to please those who can pay for its services.
Some would argue, with some justification, that ETS had no choice but to change with the times. Formerly guaranteed contracts were no longer guaranteed, and the organization needed either to pay its researchers or let them go.
Instead of now presenting itself honestly to the public as a commercial enterprise seeking profits, however, ETS continues to prominently display the trappings of a neutral research laboratory seeking truths. Top employees are awarded lofty academic titles and research “chairs”. Whether the awards derive from good research work or success in courting new business is open to question.
I perceive that ETS at least attempts something like an even split between valid research and faux-research pandering. The awarding of ETS’s most prestigious honor bestowed upon outsiders, the Angoff Award, for example, alternates between psychometricians conducting high-quality, non-political technical work one year, and high-profile gatekeepers conducting highly suspicious research the next. Members of the latter group can be found participating in, or awarding, ETS commercial contracts.
With its “research” on the Common Core test opt-out movement, ETS blew away any credible pretense that it conducts objective research where its profits are threatened. Opt-out leaders are portrayed by ETS as simple-minded, misinformed, parents of poor students … you name it. And, of course, they are protesting against “innovative, rigorous, high quality” tests they are too dumb to appreciate.
Common Core testing, in case you didn’t know and haven’t guessed from that written above, represents a substantial share of ETS’s work. Pearson holds the largest share of work for the PARCC exams, but ETS holds the second largest.
The most ethical way for ETS to have handled the issue of Common Core opt-outs would have been to say nothing. After all, it is, supposedly, a research laboratory of apolitical test developers. Its staff are experts at developing assessment instruments, not authorities on citizen movements, education administration, or public behavior.
Choosing to disregard the most ethical choice, ETS could have at least made it abundantly clear that it retains a large self-interest in the success of PARCC testing.
Instead, ETS continues to wrap itself in its old research laboratory coat and condemns opt-out movement leaders and sympathizers as ignorant and ill-motivated. Never mind that the opt-out leaders receive not a dime for their efforts, and ETS’s celebrity researchers are remunerated abundantly for communicating the company line.
Four months ago, I responded to one of these ETS anti-opt-out propaganda pieces, written by Randy E. Bennett, the “Norman O. Frederiksen Chair in Assessment Innovation at Educational Testing Service.” It took a few weeks, but ETS, in the person of Mr. Bennett, responded to my comment questioning ETS’s objectivity in the matter.
He asserted, “There’s a lot less organizationally orchestrated propaganda, and a lot more academic freedom, here than you might think!”
To which I replied, “The many psychometricians working at ETS with a starkly different vision of what constitutes “high quality” in assessment are allowed to publish purely technical pieces. But, IMHO, the PR road show predominantly panders to power and profit. ETS’s former reputation for scholarly integrity took decades to accumulate. To my observation, it seems to be taking less time to dissemble. RP”
My return comment, however, was blocked. All comments have now been removed from the relevant ETS web page. All comments remain available to read at the Disqus comments manager site, though. The vertical orange bar next to the Nonpartisan Education logo is Disqus’ indication that the comment was blocked by ETS at its web site.
On a day when we remember Martin Luther King, I want to share a personal perspective on his advocacy of non-violence. When the wisdom of a great person is invoked, omission of the context that gave it meaning demeans the person and distorts his/her message.
The origin of this reflection:
Shortly after 9/11, the teacher of the “Alternatives to Violence” class at Washington, DC’s Wilson HS, a DC Public School, invited a number of speakers opposed to the US military response to share their views with students and interested teachers.
Some also criticized the SAT as “racist,” “since poor black children shouldn’t be expected to know vocabulary words like ‘yacht’ they don’t hear at home.”*
Several spoke about the “AIDS conspiracy,” but were curiously silent about South African President Thabo Mbeki’s pseudo-scientific theories of its origins, the basis of his opposition to preventive health measures. No mention was made of the protests that Mbeki’s measures provoked.
One speaker, who had attended the recent UN World Conference Against Racism in Durban, South Africa, decried the fact that English was one of the conference’s eight “privileged” official languages. He was clearly oblivious to the fact that the Soweto uprising, which reignited the movement that culminated in the end of apartheid, began when Soweto high school students demanded the right to be taught in English: the language of Martin Luther King and Malcolm X, whose speeches on smuggled tapes were the target of the apartheid government’s thinly disguised censorship plan of making Afrikaans the language of instruction. (In 1979, when I was at Cardozo HS, with the backing of the Washington Teachers’ Union and its president, William Simons, I helped to organize a speaking tour of DC high schools for Soweto student leader Tsietsi Mashinini.)
Speakers made frequent references to Martin Luther King Jr and his advocacy of non-violence, attempting to tie it to each of these issues. A few months later, on the occasion of his birthday, students and teachers were invited to a speak-out on non-violence and the war in Afghanistan (this was a year before the Iraq invasion). I gave the following talk.
– – – – – – – – – – –
MARTIN LUTHER KING’S NON-VIOLENCE:
PERSONAL BELIEF or STRATEGY or BOTH?
From a pay phone somewhere in the Negro neighborhood of Selma, Alabama, the scratchy sound of my friend Walter’s insistent voice stirred me. Following the Battle at Selma Bridge and the cowardly murder of Jim Reeb, the ex-minister of DC’s All Souls Unitarian Church, our safe classrooms overlooking the Potomac couldn’t keep him north. His picture had just appeared in Time Magazine, backed against a wall, dodging Sheriff Jim Clark’s trademark white cane, swung from horseback with punishing effect. I could think of no reason to stay away.
It was March 1965.
Two days later, my ’51 Merc was one of several carloads that ended up in Montgomery, Alabama, home of the Confederacy and George Wallace, its current symbol of defiance. We were among the many drawn to the last half dozen or so miles of that great and swelling march, where Martin Luther King, speaking on the capital steps, called upon the U.S. Congress to enact the stalled Voting Rights Bill. The marching, the singing and the exhilaration of a common bond of purpose forged indelible memories that gave life meaning and direction:
– Packed like sardines on the floor of Mr. Ziegler’s modern brick house on a dusty street in the “colored section” that the city fathers saw no need to pave;
– An old woman pressing a few hard earned dimes and nickels into my confused and hesitant hands, blessing me for coming to her city for a day that, for too long, lived only in hope and faith;
– Swaying to the low of “We Shall Overcome,” sung with a spiritual intensity that only long-awaited justice can evoke;
– The lasting images of the long line of marchers (my first of many) winding along the highway into Montgomery, especially noticeable for the diversity visible in religious garb: priests and ministers wearing the Roman collar, nuns in their habits, men wearing the kippah (then more commonly called a yarmulke) and, as seen in the movie, a robed Eastern Orthodox prelate with cross and scepter, and the blue jean overalls favored by many of the young civil rights workers. Along the side of the road, again as in the movie, African-American children and older adults not joining in, but showing their support by smiling and waving at us.
– The prickly sensation of fear, when a jackbooted motorcycle cop, spotting my illegal left turn, pulled me over, New Jersey license plates, unimpeachable evidence of my sin as “outside agitator”:
“You boys comin’ from thuh ralllihh?”
“No, sir; we’re on our way back from Spring Break in New Orleans,”** were the timid words of discretion I heard myself speak.
“Youuu broke thuh law back a ways with that ill-legal left turn, an offense against thuh laws of Mon’gom’ry, Alibammuh. Youuu will folluh me to the cawthouse. Heahhh?”
With barely $20 between us, images of jail cells and the three recently murdered civil rights workers flashed through my mind.
I don’t really remember Martin Luther King’s speech; oh, something about voting rights and the governor’s refusal to protect the marchers. More meaningful than those forgotten words was his gift to me and countless others: A welcome into that great movement for justice and into the arms of humanity and the responsibilities that membership brings.
The power of that movement for justice and his accomplishments are misunderstood, if reduced to an oversimplified advocacy of non-violence. Understanding that it was simultaneously a strategy does not devalue his personal belief. From Thoreau’s writings on non-violent resistance to unjust laws to Gandhi’s practical application in India and the strategy workshops at Highlander Folk School (attended also by Rosa Parks), King’s vision was translated into Alabama reality by union veteran and NAACP leader E.D. Nixon. King’s vision and strategy were grounded on the confluence of evolving global changes and domestic realities that began with Brown vs. Board of Education in 1954, the irreparable fissure in America’s Berlin Wall of legalized segregation.
Like Gandhi, George Washington and even Ho Chi Minh, King understood that those who appeared to benefit from privilege were no monolith. The movement for justice could win support not only from those under the heel of Jim Crow but also from those on the other side of the color line, capable of rejecting a “just us” version of justice.
King also understood another reality that often discomforts those who favor social justice, but not when imposed by the Federal Government: Opponents often yield, not out of moral enlightenment, but when continued resistance seems futile. And, as long as resistance festers, it may reassert itself when it no longer seems futile.***
King understood the power of television. The brutal treatment of fellow Americans peacefully seeking to exercise constitutional rights long guaranteed on paper was witnessed daily in the nation’s living rooms and now became increasingly intolerable. The strategy of non-violence made nation and world witness to the real source of violence.
Then, too, the State Department had run out of red-faced explanations for the rude treatment and crude insults endured by African and Asian diplomats on Maryland’s Route 40 when driving between Washington embassies and UN offices in New York. As America competed with the Soviet Union for world leadership, the message of democracy and freedom increasingly stumbled on the hypocritical contrasts of those embarrassing facts.
Then there was that war in Viet Nam and Martin Luther King’s powerful sermon announcing his public opposition – and break with President Johnson – delivered at New York’s Riverside Church on April 4th, 1967, a year to the day before violence born of hate stole his life.
But wait! Didn’t he receive the Nobel Peace Prize three years earlier, in 1964? And didn’t the U.S. troop escalation begin in March 1965, two full years before the Riverside sermon, by which time over 10,000 Americans and tens of thousands of Vietnamese had been killed? Two years of public silence! Where were the public condemnations from the apostle of non-violence? Was he a hypocrite? If so, why not just overlook that flaw whenever the sainted, now forever muted, icon of non-violence can be invoked for the final word!
For King, the commitment to civil rights and economic opportunity compelled him to choose between his personal revulsion against the violence of war and his reluctance to alienate the president who had signed two civil rights bills and funded a war on poverty – as well as that much bigger one in Vietnam. Was the resulting conflict between the non-violence of personal conviction and the strategy of non-violence that won political support against seemingly unmovable odds just another instance of the hypocrisy?
When his advocacy of non-violence is torn from the historical context that gave it life and then reduced to a rigid slogan or dogma, the lessons to be learned from the real human dilemma lose meaning and instructive value.
For that reason, we should treat with caution efforts to invoke his blessing on present-day controversies:
Would he have condemned the U.S. military response to 9/11?
Would he condemn the World Bank and International Monetary Fund?
Would he politely ignore South African President Thabo Mbeki’s pseudo-scientific AIDS fantasies?
Would he condemn SAT tests as racist?
Before rushing to offer a politically convenient answer, we should remember that, as a leader breaking new ground, he took responsibilities upon himself that made rigid adherence to doctrine or philosophy a luxury. Before invoking his blessing for some partisan cause, we should recall how easy it is to summon gods and icons to legitimize both human cruelty and human kindness.
Oh – some stories do end well. The fine for the moving violation on the streets of Montgomery: “City of Montgomery vs. Erich Martel: $3.00,” which, in 1965, was the price of 10 gallons of gas.
For Viola Liuzzo, however, a mother of five from Detroit who volunteered to drive marchers between Selma and Montgomery, a Klansman’s drive-by shotgun blast ended her life, joining her name to the countless many who paid the ultimate price in pursuit of justice.
— Erich Martel [originally written, January 15, 2002]
* Core knowledge advocate E.D. Hirsch has pointed out that the 1960’s Black Panther Party newspaper employed correct grammar and used words like “imperialism,” “capitalism,” etc., assuming that its target audience would know or learn terms and concepts they were unlikely to hear at home.
** In fact, a mere 10 days earlier, a bunch of us had driven to New Orleans for Mardi Gras, which is probably why that came so quickly to mind.
*** We now see that this has come to pass. After the U.S. Supreme Court’s 2013 Shelby County decision weakened the enforcement provisions of the Voting Rights Act, many state legislatures began to enact restrictive voting laws.
A new round of two international comparisons of student mathematics performance came out recently, and there was a lot of interest because the reports were almost simultaneous: TIMSS in late November 2016 and PISA just a week later. They are often reported as 2015 rather than 2016 because the data collection for each took place in late 2015, which would seem to make comparison even more apt. In fact, no comparison is appropriate; they are completely different instruments, and, between them, TIMSS is the one that should be of more concern to educators. Perhaps surprisingly, and with great room for improvement, the US performance is not as dire as the PISA results would imply. By contrast, Finland continues to demonstrate that its internationally recognized record of PISA-proven success in mathematics education – with its widely applauded, student-friendly approach – is completely misleading.
In spite of the popular press and mathematics education folklore, Finland’s performance has been known to be overrated since PISA first came out, as documented in an open letter written by the president of the Finnish Mathematical Society and cosigned by many mathematicians and experts in other math-based disciplines:
“The PISA survey tells only a partial truth of Finnish children’s mathematical skills” “in fact the mathematical knowledge of new students has declined dramatically”
This letter links to a description of the most fundamental problem that directly involves elementary mathematics education:
“Severe shortcomings in Finnish mathematics skills” “If one does not know how to handle fractions, one is not able to know algebra”
The previous TIMSS had the 4th-grade performance of Finland a bit above that of the US but well behind it by 8th grade. In the new report, Finland has slipped below the US at 4th grade and did not even submit itself for assessment at 8th grade, much less at the Advanced level. Similar remarks apply to another country often recognized for its student-friendly mathematics education, the Netherlands, home of the PISA at the Freudenthal Institute. This decline was recognized in the TIMSS summary of student performance, with the comparative grade-level rankings given as Exhibits 1.1 and 1.2 and the Advanced results as Exhibit M1.1:
By contrast, PISA came out a week later and…
United States: 41
Note: The rankings include China* (just below Japan), represented by only three provinces rather than the whole country; if it is omitted, subtract 1.
Why the difference? The problem is that PISA was never a test of “school mathematics” but of all 15-year-old students’ “mathematics literacy” – not even mathematics at the algebra level needed for non-remedial admission to college, much less the TIMSS Advanced level, interpreted in the US as AP or IB Calculus:
“PISA is the U.S. source for internationally comparative information on the mathematical and scientific literacy of students in the upper grades at an age that, for most countries, is near the end of compulsory schooling. The objective of PISA is to measure the “yield” of education systems, or what skills and competencies students have acquired and can apply in these subjects to real-world contexts by age 15. The literacy concept emphasizes the mastery of processes, understanding of concepts, and application of knowledge and functioning in various situations within domains. By focusing on literacy, PISA draws not only from school curricula but also from learning that may occur outside of school.”
Historically relevant is the fact that the conception of PISA at the Freudenthal Institute in the Netherlands included heavy guidance from Thomas Romberg of the University of Wisconsin’s WCER, the original creator of the middle school math ed curriculum MiC, Mathematics in Context. Its underlying philosophy is exactly that of PISA: the study of mathematics through everyday applications that do not require the development of the more sophisticated mathematics that opens the doors to deeper study in mathematics; i.e., to all mildly sophisticated math-based career opportunities, the so-called STEM careers. In point of fact, the arithmetic of the PISA applications is calculator-friendly, so even elementary arithmetic through ordinary fractions – so necessary for eventual algebra – need not be developed to score well.
 http://nces.ed.gov/pubs2017/2017048.pdf (Table 3, page 23)
 http://timss2015.org/advanced/ [Distribution of Advanced Mathematics Achievement]
Wayne Bishop, PhD
Professor of Mathematics, Emeritus
California State University, LA
The Concord Review
December 2, 2016
Dinosaur scholars like Mark Bauerlein argue that the decline in the humanities in our universities is caused by their retreat from their own best works—literature departments no longer celebrate great literature, history departments no longer offer great works of history to students to read, and so on.
However, an exciting new article by Nicholas Lemann in The Review from The Chronicle of Higher Education, while it shares some concerns about the decline of the humanities, proposes an ingenious modern new Core, which would…
“put methods above subject-matter knowledge in the highest place of honor, and they treat the way material is taught as subsidiary to what is taught…”
In this new design, what is taught is methods, not knowledge—of history, literature, languages, philosophy and all that…
Here is a list of the courses Professor Lemann recommends:
Cause and Effect
The Language of Form
Thinking in Time
And he says that: “What these courses have in common is a primary commitment to teaching the rigorous (and also properly humble) pursuit of knowledge.”
At last we can understand that the purpose of higher education in the humanities should be the pursuit of knowledge, and not actually to catch up with any of it. We may thus enjoy a new generation of mentally “fleet-footed” ignoramuses who have skipped the greatness of the humanities in the chase for methods and skills of various kinds. This approach is as hollow and harmful as it was in the 1980s, when Harvard College tried to design a knowledge-free, methods-filled Core Curriculum, so it seems that what comes around does indeed come around, but still students are neither learning from nor enjoying the greatness of the humanities in college much these days…
“Teach with Examples”
Will Fitzhugh [Founder]
The Concord Review 
Ralph Waldo Emerson Prizes 
National Writing Board 
TCR Academic Coaches 
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
For starters, he can shut down the federal funding of organizations that have supplied the misinformation that begat and continues to propagandize Common Core. While the Gates Foundation gets the most attention, government-funded entities play their part. For example, our nation could be much improved if relieved of the burden of fuzzy research produced at the Center for Research on Educational Standards and Student Testing (CRESST), the Board on Testing and Assessment (BOTA) at the National Research Council, and K-12 programs in the Education and Human Resources (EHR) Division of the National Science Foundation. All have been captured by education’s vested interests, and primarily serve them.
The online journal Aeon posted (6 October, 2016) The Examined Life, by John Taylor, director of Learning, Teaching and Innovation at Cranleigh boarding school in Surrey (U.K.).
Taylor advocates “independent learning” in describing his “ideal classroom”:
“The atmosphere in the class is relaxed, collaborative, enquiring; learning is driven by curiosity and personal interest. The teacher offers no answers but instead records comments on a flip-chart as the class discusses. Nor does the lesson end with an answer. In fact it doesn’t end when the bell goes: the students are still arguing on the way out.”
As for what he sees as the currently dominant alternative:
“Students are working harder than ever to pass tests but schools allow no time for true learning in the Socratic tradition.”
“Far from being open spaces for free enquiry, the classroom of today resembles a military training ground, where students are drilled to produce perfect answers to potential examination questions.”
…You get the drift.
A bit sarcastically, I write in the Comments section:
“So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher.”
To my surprise, other readers respond. I find the responses interesting. (Numbers of “Likes” current as of 9 October, 2016.)
So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher. Like 0
If only it were like that! The ideal classroom described in this article would be led by a teacher who does a very different kind of work: coaching others to think rather than dictating everything, being patient with confusion rather than rushing to answers, discarding pre-determined outcomes and instead promoting outcomes that reveal themselves within lessons. This is very difficult, time-consuming teacher work. Like 2
One purpose for tests is as an indicator to parents and taxpayers that their children are learning something. How would you convince parents and taxpayers that students have “learned how to think”? I presume that there is no test for that, and that you might not want to use it even if it existed, as that could induce “teaching to the test”. So, what would you tell them? Like 1
Great point and questions. Therein lies the challenge. Since thinking itself is a mental process, it eludes empirical measurement in a very real way. We are in an education system that places value on things only if students can show they can DO something (this is the behaviorist model) and only if what they do is measurable using the language of mathematics. Standardized tests are wonderful models to use once we have embraced these assumptions. Cultivating independent thinking isn’t really on the radar.
Though I tell them that writing assessments or projects (as referenced in the article) are better vehicles to demonstrate independent thinking. Like 2
I would agree with you Dan. Project work has the advantage that it is conducted over a period of time, during which a range of skills can be exhibited, and, typically, the teacher can form a better judgement of the student’s capacity for thinking their way through a problem. Exams, being a snapshot, are limited in this regard and the assessment of factual recall tends to be to the fore, as opposed to capacities for reflection, questioning of assumptions, exploration of creative new options, and so on. I think too that we could make more use of the viva; in my experience, asking a student to talk for a few minutes is an excellent way of gauging the depth of their understanding Like 1
Teaching people the ability to think is more important than passing tests. What is important is the ability of people to think for themselves and to attain understanding. Not to simply unthinkingly churn out what others have said. Like 0
Richard P. Phelps
Again, how do you measure that? How can a parent or taxpayer know that their children are better off for having gone to school? How do you prove or demonstrate that a child is now better able to think than before? Like 1
Richard, therein lies the dilemma – the need for people to measure rather than believe. If we stopped being obsessed with measuring and categorising so deeply everything we do, we would be in a better position. You should only need to talk to a child to know that they have learned to think. Maybe we don’t have time to do that. Like 0
I would ask them to read “An Atom or a Nucleus?” It takes the position that the thing that has virtually all the mass of the atom, and which accounts for all the properties of the atom, is actually the atom itself, not some sort of “nucleus” of something. This goes contrary to what we have been taught for the past 100 years.
This is supposedly “hard science” physics. But it raises deeply disturbing questions about Pavlovian style education.
The link is http://scripturalphysics.org/4v4a/ATMORNUC.html (Take the test at the end of the article)
If we are wrong about the atom “having” a nucleus, we could be wrong about A LOT of things, even in the “objective sciences”. Like 0
I think most parents want what is best for their children. I don’t think anyone wants their child to be a little robot who can take a test but not navigate through life and all its challenges. And if they do, that’s just sad. It should be noted that the author did not say we should do away with examinations. In fact, they said this kind of class increases performance on examinations, and I have first hand experience with that since I teach a class after school, on a volunteer basis, that also uses a discussion format. Our program has also helped improve test scores among students that took it (and this in a lower income neighborhood) and we have data to prove it. So the results will show, I have confidence in that.
But there is an easy way parents can know what their kids are learning in school. They can just talk to them. And these kids actually want to be in my classroom. One time, I was going to cancel class because my co-facilitator did not show up and she had all the materials. But the kids, and this is, let me remind you, AFTER school, came trailing into an empty classroom with their chairs and started setting up. I told them they had the day off, they could go play. They kept on setting up and said they wanted to have the class anyway and since I was there I might as well do it. These kids wanted to be there. These are regular kids by the way, chosen at random by the after school program. They wanted to be in that class because we have great discussions. These discussions are not random though, the questions are carefully chosen based on a curriculum that has been scientifically validated, and we guide the discussion along to make sure it goes somewhere productive. We don’t take a fully Socratic approach, we have a mixed teaching and discussion style. The classes are about an hour and a half long. And I’ve had parents come up to me many times and thank me personally because they have seen their children change after taking my program. So if kids are interested and engaged in school, they will talk to and tell you about it if you ask. Because they are interested, and kids, like all people, like speaking about things they are interested in. Like 2
Nice for the teacher, nice for the children, nice for society as a whole that we are educating people to think for themselves. Like 0
Richard P. Phelps
“we are educating people to think for themselves” How do you know you are? How do you measure it? Like 0
This type of teaching takes a great deal of preparation, and I would say it is actually far more challenging for a teacher to guide and direct students towards answers and valuable discussion than to spout out the answers themselves. The teacher who looks like they are doing very little, and manages to guide students to a point where they have learnt something, is an outstanding teacher – they pull the strings, and the students are guided into finding the answers themselves: students feel fantastic because they did it ‘on their own’, and, because they did the legwork instead of writing down an answer they were told, it sticks in their mind for much longer. Like 1
Digital Diogenes Aus
Teaching to the test is easy.
Sure, it’s stressful and a lot of work, but it’s a lot of grunt work.
Teaching in the Socratic fashion is hard- you actually have to know what you’re talking about, you have to know your kids, and you have to consistently stay ahead of the curve Like 1
In a previous post, I summarized recent Form 990s—the financial reporting documents required of large US non-profits by the Internal Revenue Service—filed by three organizations. The Thomas B. Fordham Institute, the Alliance for Excellent Education, and the National Center on Education and the Economy were and are paid handsomely to promote the Common Core Standards and affiliated programs.
Here, I review Form 990s for three more Common Core-connected organizations—Achieve, The Council of Chief State School Officers (CCSSO), and PARCC, Inc.
PARCC, the acronym for Partnership for Assessment of Readiness for College and Careers, represents one of two Common Core-affiliated testing consortia. I attempted to find Form 990s for the other testing consortium, Smarter-Balanced, but failed. They would appear to be very well hidden, inside the labyrinthine accounting structure of either the University of California-Los Angeles (UCLA) or the University of California system.
The most recently available documents online for each organization included below emanate from either the 2013 or 2014 tax and calendar year. According to Achieve’s filing, it spun off PARCC, Inc. as “an unrelated entity” exactly midway through 2014.
Now for the salaries…
Achieve2013 – Achieve claimed four program activities for the year, all associated with “college and career ready initiatives”. Six employees, including President Michael Cohen and Senior Math Associate Laura Slover, received financial compensation in excess of $200,000, and twenty in excess of $100,000. Another $195,000 went to Common Core Standards writer Sue Pimentel living up in New Hampshire, as “consultant”. Public Opinion Strategies received over $175,000 for “research”. “Council of State Science Supervisor” “consultants” collectively absorbed half a million.
Oddly, Achieve listed zero expenses for “lobbying” and “advertising and promotion”. Instead, it categorized almost $5 million under “Other professional fees”. Almost a million each was spent on travel and “conferences, conventions, and meetings.”
Council of Chief State School Officers
CCSSO2014 – CCSSO received over $2.5 million in member dues, primarily from states paying for places at the table for their state chief education officers. Not many years ago, these dues, plus whatever surplus income it kept from annual meeting registrations, paid its rent and salaries.
In 2014, however, “contracts, grants, & sponsorships” income exceeded $31 million, twelve times the amount from dues and meetings. CCSSO in its current form could easily survive a loss of member dues payments; it could not survive a loss of contracts and grants—read Common Core promotion payments. The tail now wags the dog.
Twenty-six CCSSO staffers received salaries in excess of $100,000 annually. At least another six took home more than $200,000. The CEO, Chris Minnich, got more than a quarter million. Over half a million was claimed for “lobbying to influence a legislative body (direct lobbying)”, but $0 as “lobbying to influence public opinion (grass roots lobbying).” Yet, at another juncture, a “grassroots nontaxable amount” of $250,000 is declared.
CCSSO spent over $8 million on travel in 2014, more than on salaries and wages.
So much money flows through CCSSO that it earned almost a quarter million dollars from investments alone in 2014.
PARCC2014 – According to Achieve, PARCC, Inc. began life on July 1, 2014. Nonetheless, its top officers seem to have earned healthy annual salaries: seven in excess of $100,000 and two in excess of $200,000. Laura Slover, last seen above as Senior Math Associate at Achieve in 2013, became CEO of PARCC, Inc. in 2014, with over a quarter million in salary. PARCC spent $1.242 million on travel in 2014.
PARCC’s revenue consisted of $66 million in government grants, and $0.6 million from everywhere else. PARCC’s expenses comprised $34.8 million to NCS Pearson and $6.4 million to ETS for test development, and $1.3 million to Rupert Murdoch’s and Joel Klein’s Amplify Education and $0.8 million to Breakthrough Technologies for IT work.
As this public school year begins, districts across California are reporting student performance on new exams based on California’s adaptation of the controversial Common Core federal standards. Students and parents have good reason to be anxious about the newly released scores now and for years to come.
The first thing we are told by state officials is that the exams are based on “more rigorous Common Core academic standards.” In many states, the remark would be correct. But in California, especially in mathematics, the exact opposite is true.
California and Massachusetts had the best state standards in the country and we have both lost them, along with the excellent CSTs (California Standards Tests) and each school’s API (Academic Performance Index). The API’s two 1-10 scores were based on the school’s CSTs — collective student performance — against all California schools and also against 100 comparable schools. Although simplistic, these were amazingly effective. They were incomparably better than the new color-coded “scores” that interested observers will not understand, probably by design.
There is a widely held misconception that multiple-choice tests are misinforming because it is “easy for students to guess answers.” This belief ignores the reality that all students are in the same boat, with strong students having a better opportunity to demonstrate what they know.
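The guessing point can be illustrated with a quick simulation. This is a hypothetical sketch, not from the article: the 50-item test length, four answer choices, and the two knowledge levels are all assumed for illustration.

```python
import random

random.seed(1)

def simulate_score(p_known, n_items=50, n_choices=4):
    """Score for a student who truly knows each item with probability
    p_known and guesses uniformly at random otherwise."""
    score = 0
    for _ in range(n_items):
        if random.random() < p_known:
            score += 1                      # answered from knowledge
        elif random.random() < 1 / n_choices:
            score += 1                      # lucky guess
    return score

def mean_score(p_known, n_students=2000):
    """Average score over many simulated students at one knowledge level."""
    return sum(simulate_score(p_known) for _ in range(n_students)) / n_students

weak = mean_score(0.3)    # knows ~30% of the material
strong = mean_score(0.8)  # knows ~80% of the material
print(weak, strong)       # guessing lifts both, but strong students
                          # still score far higher
```

Guessing adds the same modest, noisy bonus for everyone (roughly 1/4 of the unknown items here), so it raises all scores without erasing the gap between strong and weak students.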
As described by the officials, the new test requires students to answer follow-up questions and perform a task that shows their research and problem-solving skills. Nice as this sounds, the reality is that it makes the mathematics tests far more verbal. Any student with weak reading and writing skills is unfairly assessed. That is especially problematic for English learners.
Low socio-economic Latino kids will be further burdened in demonstrating their mathematics competence, and Chinese or Korean immigrants who are a couple of years ahead mathematically (as was my daughter-in-law when she immigrated as a fifth-grader from Korea) will be told their mathematics competence is deficient. Absolutely absurd. Mathematics carried her for a couple of years until her English became good enough for academic work in other subjects.
The Common Core math standards, and the misguided philosophy of mathematics education behind them, are the heart of the problem. The new assessments simply reflect them. They say mathematics is best learned through students’ exploration of lengthy “real world” problems rather than the artificial setting of a competent teacher teaching a concept followed by straightforward applications thereof.
Reality is that traditional (albeit contrived) word problems lead to better retention and use of the mathematics involved. Comparison with the highly effective Singapore Primary Math Series is illustrative.
Another misconception of teachers and assessment “experts” is that Common Core requires students to use nonstandard arithmetic algorithms. Nonetheless, these are often taught in place of the familiar ones; e.g., borrow/carry in subtraction/addition and vertical multiplication with its place-value shift with successive digits. Stephen Colbert’s delightful derision, which you can find by googling Colbert and Common Core, provides an example of the resulting parental frustration.
Hard as it is to believe, one of the top three guides for the national math standards, and the sole guide for California’s new exams from the Smarter Balanced Assessment Consortium, has no degree in mathematics; his degree is in English literature.
Moreover, both the corresponding curricula and these less meaningful assessments are exactly what the Math Wars of the 1990s were about. California’s former standards, which came out in late 1997, were written by a subgroup of the Stanford mathematics faculty and were based on the goal of making eighth-grade algebra a realistic opportunity for all California students, not just those whose parents can afford a good private school.
The idea that the Common Core standards and associated assessments are more rigorous and provide greater opportunities for California students is based on ignorance or, worse, is completely disingenuous.
Wayne Bishop is a professor of mathematics at Cal State Los Angeles.
*Originally published in the San Gabriel Valley [Los Angeles] Tribune, 2 September, 2016
It looks like a recent, very problematic report from Johns Hopkins University, “For All Kids, How Kentucky is Closing the High School Graduation Gap for Low-Income Students,” is likely to get pushed well beyond the Bluegrass State’s borders.
The publishers just announced a webinar on this report for August 30th.
Anyway, you need to get up to speed on why this report is built on a foundation of sand. You can do that fairly quickly by checking these blogs:
A third blog will release at 8 am Eastern tomorrow. It will probably link at
I won’t know for sure until it releases, however.
Let me know if you have questions and especially if this Hopkins report starts making the rounds in your state.
In scholarly terms, a review of the literature or literature review is a summation of the previous research conducted on a particular topic. With a dismissive literature review, a researcher assures the public that no one has yet studied a topic or that very little has been done on it. Dismissive reviews can be accurate, for example with genuinely new scientific discoveries or technical inventions. But, often, and perhaps usually, they are not.
A recent article in the Nonpartisan Education Review includes hundreds of statements—dismissive reviews—of some prominent education policy researchers.* Most of their statements are inaccurate; perhaps all of them are misleading.
“Dismissive review”, however, is the general term. In the “type” column of the files linked to the article, a finer distinction is made among simply “dismissive”—meaning a claim that there is no or little previous research, “denigrating”—meaning a claim that previous research exists but is so inferior it is not worth even citing, and “firstness”—a claim to be the first in the history of the world to ever conduct such a study. Of course, not citing previous work has profound advantages, not least of which is freeing up the substantial amount of time that a proper literature review requires.
By way of illustrating the alacrity with which some researchers dismiss others’ research as not worth looking for, I list the many terms marshaled for the “denigration” effort in the table below. I suspect that in many cases, the dismissive researcher has not even bothered to look for previous research on the topic at hand, outside his or her small circle of colleagues.
Regardless, the effect of the dismissal, particularly when coming from a highly influential researcher, is to discourage searches for others’ work, and thus draw more attention to the dismisser. One might say that “the beauty” of a dismissive review is that rival researchers are not cited, referenced, or even identified, thus precluding the possibility of a time-consuming and potentially embarrassing debate.
Just among the bunch of high-profile researchers featured in the Nonpartisan Education Review article, one finds hundreds of denigrating terms employed to discourage the public, press, and policymakers from searching for the work done by others.
To consolidate the mass of verbiage somewhat, I group similar terms in the table below.
(Frequency) Denigrating terms used for other research
(43) [not] ‘systematic’; ‘aligned’; ‘detailed’; ‘comprehensive’; ‘large-scale’; ‘cross-state’; ‘sustained’; ‘thorough’
(31) [not] ‘empirical’; ‘research-based’; ‘scholarly’
(29) ‘limited’; ‘selective’; ‘oblique’; ‘mixed’; ‘unexplored’
(19) ‘small’; ‘scant’; ‘sparse’; ‘narrow’; ‘scarce’; ‘thin’; ‘lack of’; ‘handful’; ‘little’; ‘meager’; ‘small set’; ‘narrow focus’
(15) [not] ‘hard’; ‘solid’; ‘strong’; ‘serious’; ‘definitive’; ‘explicit’; ‘precise’
(14) ‘weak’; ‘weaker’; ‘challenged’; ‘crude’; ‘flawed’; ‘futile’
(9) ‘anecdotal’; ‘theoretical’; ‘journalistic’; ‘assumptions’; ‘guesswork’; ‘opinion’; ‘speculation’; ‘biased’; ‘exaggerated’
(8) [not] ‘rigorous’
(8) [not] ‘credible’; ‘compelling’; ‘adequate’; ‘reliable’; ‘convincing’; ‘consensus’; ‘verified’
(7) ‘inadequate’; ‘poor’; ‘shortcomings’; ‘naïve’; ‘major deficiencies’; ‘futile’; ‘minimal standards of evidence’
(5) [not] ‘careful’; ‘consistent’; ‘reliable’; ‘relevant’; ‘actual’
(4) [not] ‘clear’; ‘direct’
(4) [not] ‘high quality’; ‘acceptable quality’; ‘state of the art’
(4) [not] ‘current’; ‘recent’; ‘up to date’; ‘kept pace’
(4) ‘statistical shortcomings’; ‘methodological deficiencies’; ‘individual student data, followed school to school’; ‘distorted’
(2) [not] ‘independent’; ‘diverse’
As well as illustrating the facility with which some researchers denigrate the work of rivals, the table summary also illustrates how easy it is. Hundreds of terms stand ready for dismissing entire research literatures. Moreover, if others’ research must satisfy the hundreds of sometimes-contradictory characteristics used above simply to merit acknowledgement, it is not surprising that so many of the studies undertaken by these influential researchers are touted as the first of a kind.
* Phelps, R.P. (2016). Dismissive reviews in education policy research: A list. Nonpartisan Education Review/Resources/DismissiveList.htm
Some say that now is a wonderful time to be a psychometrician — a testing and measurement professional. There are jobs aplenty, with high pay and great benefits. Work is available in the private sector at test development firms; in recruiting, hiring, and placement for corporations; in public education agencies at all levels of government; in research and teaching at universities; in consulting; and many other spots.
Moreover, there exist abundant opportunities to work with new, innovative, “cutting edge”, methods, techniques, and technologies. The old, fuddy-duddy, paper-and-pencil tests with their familiar multiple-choice, short-answer, and essay questions are being replaced by new-fangled computer-based, internet-connected tests with graphical interfaces and interactive test item formats.
In educational testing, the Common Core Standards Initiative (CCSI) and its associated tests, developed by the Smarter-Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have encouraged the movement toward “21st century assessments”. Much of the torrential rain of funding, burst forth from federal and state governments and clouds of wealthy foundations, has pooled in the pockets of psychometricians.
At the same time, however, the country’s most authoritative psychometricians—the very people who would otherwise have been available to guide, or caution against, the transition to the newer standards and tests—have been co-opted. In some fashion or another, they now work for the CCSI. Some work for the SBAC or PARCC consortia directly, some work for one or more of the many test development firms hired by the consortia, some help the CCSI in other capacities. Likely, they have all signed confidentiality agreements (i.e., “gag orders”).
Psychometricians who once had been very active in online chat rooms and other open discussion forums on assessment policy no longer are, except to proffer canned promotions for the CCSI entities they now work for. They are being paid well. They may be doing work they find new, interesting, and exciting. But, with their loss of independence, society has lost their perspective.
Perhaps the easiest vantage point from which to see this loss of perspective is in the decline of adherence to test development quality standards, those that prescribe the behavior of testing and measurement professionals themselves. Over the past decade, for example, the International Test Commission (ITC) alone has developed several sets of standards.
Perhaps the oldest set of test quality standards was established originally by the American Psychological Association (APA) and was updated most recently in 2014—the Standards for Educational and Psychological Testing (AERA, NCME, APA). It contains hundreds of individual standards. The CCSI as a whole, and the SBAC and PARCC tests in particular, fail to meet many of them.
The problem starts with what many professionals consider the testing field’s “prime directive”—Standard 1.0 (AERA, APA, & NCME, 2014, p. 23). It reads as follows:
“Clear articulation of each intended test score interpretation for a specified use should be set forth, and appropriate validity evidence in support of each intended interpretation should be provided.”
That is, a test should be validated for each purpose for which it is intended to be used, before it is used for that purpose, before it is used to make important decisions, and before it is advertised as serving that purpose.
Just as states were required by the Race to the Top competition for federal funds to accept Common Core standards before they had even been written, CCSI proponents have boasted about their new consortium tests’ wonderful benefits since before test development even began. They claimed unproven qualities for then-nonexistent tests, because most CCSI proponents do not understand testing, or they are paid not to understand.
In two fundamental respects, the PARCC and SBAC tests will never match their boosters’ claims or meet basic accepted test development standards. First, single tests are promised to measure readiness for too many and too disparate outcomes—college and careers—that is, all possible futures. It is implied that PARCC and SBAC will predict success in art, science, plumbing, nursing, carpentry, politics, law enforcement …any future one might wish for.
This is not how it is done in educational systems that manage multiple career pathways well. There—in Germany, Switzerland, Japan, Korea, and, unfortunately, only a few jurisdictions in the U.S.—a range of different types of tests are administered, each appropriately designed for its target profession. Aspiring plumbers take plumbing tests. Aspiring medical workers take medical tests. And those who wish to prepare for more advanced degrees might take more general tests that predict their aptitude to succeed in higher education institutions.
But that isn’t all. SBAC and PARCC are said to be aligned to the K-12 Common Core standards, too. That is, they both summarize mastery of past learning and predict future success. One test purports to measure how well students have done in high school, and how well they will do in either the workplace or in college, three distinctly different environments, and two distinctly different time periods.
PARCC and SBAC are being sold as replacements for state high school exit exams, for 4-year college admission tests (e.g., the SAT and ACT), for community college admission tests (e.g., COMPASS and ACCUPLACER), and for vocational aptitude tests (e.g., the ASVAB). The problem is, these are very different types of tests. High school exit exams are generally not designed to measure readiness for future activity but, rather, to measure how well students have learned what they were taught in elementary and secondary schools. We have high school exit exams because citizens believe it important for their children to have learned what is taught there. Learning civics well in high school, for example, may not correlate highly with how well a student does in college or career, but many nonetheless consider it important for our republic that its citizens learn the topic.
High school exit exams are validated by their alignment with the high school curriculum, or content standards. By contrast, admission or aptitude tests are validated by their correlation with desired future outcomes—grades, persistence, productivity, and the like in college—their predictive validity. In their pure, optimal forms, a high school exit exam, a college admission test, and vocational aptitude tests bear only a slight resemblance to each other. They are different tests because they have different purposes and, consequently, require different validations.
Let’s assume for the moment that the Common Core consortia tests, PARCC and SBAC, can validly measure all that is claimed for them—mastery of the high school curriculum and success in further education and in the workplace. The fact is no evidence has yet been produced that verifies any of these things. And, remember, the proof of, and the claims about, a new test’s virtues are supposed to be provided before the test is used purposefully.
Sure, Common Core proponents claim to have just recently validated their consortia tests for correlation with college outcomes, for alignment with elementary and secondary school content standards, and for technical quality. The clumsy studies they cite do not match the claims made for them, however.
SBAC and PARCC cannot be validated for their purpose of predicting college and career readiness until data are collected in the years to come on the college and career outcomes of those who have taken the tests in high school. The study cited by Common Core proponents uses the words “predictive validity” in its title. Only in the fine print does one discover that, at best, the study measured “concurrent” validity—high school tests were administered to current rising college sophomores and compared to their freshman-year college grades. Calling that “predictive validity” is, frankly, dishonest.
It might seem less of a stretch to validate SBAC and PARCC as high school exit exam replacements. After all, they are supposedly aligned to the Common Core Standards, so in any jurisdiction where the Common Core Standards prevail, they would be retrospectively aligned to the high school curriculum. Two issues tarnish this rosy picture. First, the Common Core Standards are narrow in subject coverage—just mathematics and English Language Arts—with no attention paid to the majority of the high school curriculum.
Second, common adherence to the Common Core Standards across the States has deteriorated to the point of dissolution. As the Common Core consortia’s grip on compliance (i.e., alignment) continues to loosen, states, districts within states, and schools within districts are teaching how they want and what they want. The less aligned Common Core Standards become, the less valid the consortium tests become as measures of past learning.
As for technical quality, the Fordham Institute, which is paid handsomely by the Bill & Melinda Gates Foundation to promote Common Core and its consortia tests, published a report which purports to be an “independent” comparative standards alignment study. Among its several fatal flaws: instead of evaluating tests according to the industry-standard Standards for Educational and Psychological Testing, or any of dozens of other freely available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, it employed “a brand new methodology” specifically developed for Common Core and its copyright owners, and paid for by Common Core’s funders.
Though Common Core consortia test sales pitches may be the most disingenuous, SAT and ACT spokespersons haven’t been completely forthright either. To those concerned about the inevitable degradation of predictive validity if their tests are truly aligned to the K-12 Common Core standards, public relations staffs assure us that predictive validity is a foremost consideration. To those concerned about the inevitable loss of alignment to the Common Core standards if predictive validity is optimized, they assure complete alignment.
So, all four of the test organizations have been muddling the issue. It is difficult to know what we are going to get with any of the four tests. They are all straddling or avoiding questions about the trade-offs. Indeed, we may end up with four, roughly equivalent, muddling tests, none of which serve any of their intended purposes well.
This is not progress. We should want separate tests, each optimized for a different purpose, be it measuring high school subject mastery, or predicting success in 4-year college, in 2-year college, or in a skilled trade. Instead, we may be getting several one-size-fits-all, watered-down tests that claim to do all but, as a consequence, do nothing well. Instead of a skilled tradesperson’s complete tool set, we may be getting four Swiss army knives with roughly the same features. Instead of exploiting psychometricians’ advanced knowledge and skills to optimize three or more very different types of measurements, we seem to be reducing all of our nationally normed end-of-high-school tests to a common, generic muddle.
McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/
Nichols-Barrer, I., Place, K., Dillon, E., & Gill, B. (2015, October 5). Final Report: Predictive Validity of MCAS and PARCC: Comparing 10th Grade MCAS Tests to PARCC Integrated Math II, Algebra II, and 10th Grade English Language Arts Tests. Cambridge, MA: Mathematica Policy Research. http://econpapers.repec.org/paper/mprmprres/a2d9543914654aa5b012e4a6d2dae060.htm
Phelps, R.P. (2016, February). Fordham Institute’s pretend research. Policy Brief. Boston: Pioneer Institute. http://pioneerinstitute.org/featured/fordhams-parcc-mcas-report-falls-short/
American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for Educational and Psychological Testing. Washington, DC: AERA.
Linked are copies of Form 990s for Marc Tucker’s National Center for Education and the Economy (NCEE), Checker Finn’s Fordham Foundation and Fordham Institute, and Bob Wise’s Alliance for Excellent Education (AEE). Each pays himself and at least one other well.
All non-profit organizations with revenues exceeding $50,000 must file Form 990s annually with the Internal Revenue Service. And, in return for the non-profits’ tax-exempt status, their Form 990s are publicly available.
As to salaries…
National Center for Education and the Economy
NCEE2013Form990 – Marc Tucker pays himself $501,087, and six others receive from $162k to $379k (p.40 of 48); his son, Joshua Tucker receives $214,813 (p. 42)
…also interesting: p.16 (contrast with p.15), pp. 19, 27, 37
Alliance for Excellent Education
AEE2013Form990 – Bob Wise pays himself $384,325, and six others receive from $162k to $227k. (see p.27 of 36)
…also interesting: p.24 (“Madoff Recovery”)
Thomas B. Fordham Foundation & Institute
FordhamF2013Form990 & FordhamI2013Form990 – With both a “Foundation” and an “Institute”, Checker Finn and Mike Petrilli can each pay themselves about $100k, twice. (see p.25 of 42)
…also interesting: p.19 ($29 million in investments; $1.5 million for an interest rate swap); p.37 (particularly the two entries for “Common Sense Offshore, Ltd.”)