A New Core

The Concord Review
December 2, 2016

Dinosaur scholars like Mark Bauerlein argue that the decline in the humanities in our universities is caused by their retreat from their own best works—literature departments no longer celebrate great literature, history departments no longer offer great works of history to students to read, and so on.

However, an exciting new article by Nicholas Lemann in The Review from The Chronicle of Higher Education, while it shares some concerns about the decline of the humanities, proposes an ingenious modern new Core, which would…

“put methods above subject-matter knowledge in the highest place of honor, and they treat the way material is taught as subsidiary to what is taught…”

In this new design, what is taught is methods, not knowledge—of history, literature, languages, philosophy and all that…

Here is a list of the courses Professor Lemann recommends:

Information Acquisition
Cause and Effect
Interpretation
Numeracy
Perspective
The Language of Form
Thinking in Time
Argument

And he says that: “What these courses have in common is a primary commitment to teaching the rigorous (and also properly humble) pursuit of knowledge.”

At last we can understand that the purpose of higher education in the humanities should be the pursuit of knowledge, and not actually to catch up with any of it. We may thus enjoy a new generation of mentally “fleet-footed” ignoramuses who have skipped the greatness of the humanities in the chase for methods and skills of various kinds. This approach is as hollow and harmful as it was in the 1980s, when Harvard College tried to design a knowledge-free, methods-filled Core Curriculum, so it seems that what goes around does indeed come around, but still students are neither learning from nor enjoying the greatness of the humanities in college much these days…

——————-

“Teach with Examples”
Will Fitzhugh [Founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Academic Coaches [2014]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
978-443-0022
www.tcr.org; fitzhugh@tcr.org
Varsity Academics®
tcr.org/bookstore
www.tcr.org/blog

Posted in Common Core, Education Reform, Higher Education, Humanities, Will Fitzhugh

Yes, President Trump can do something about Common Core

For starters, he can shut down the federal funding of organizations that have supplied the misinformation that begat and continues to propagandize Common Core. While the Gates Foundation gets the most attention, government-funded entities play their part. For example, our nation could be much improved if relieved of the burden of fuzzy research produced at the Center for Research on Educational Standards and Student Testing (CRESST), the Board on Testing and Assessment (BOTA) at the National Research Council, and K-12 programs in the Education and Human Resources (EHR) Division of the National Science Foundation. All have been captured by education’s vested interests, and primarily serve them.

Posted in Common Core, Education policy, K-12, research ethics, Richard P. Phelps, Testing/Assessment

Among the Constructivists

The online journal Aeon posted (6 October, 2016) The Examined Life, by John Taylor, director of Learning, Teaching and Innovation at Cranleigh boarding school in Surrey (U.K.).

https://aeon.co/essays/can-school-today-teach-anything-more-than-how-to-pass-exams

Taylor advocates “independent learning” in describing his “ideal classroom”:

“The atmosphere in the class is relaxed, collaborative, enquiring; learning is driven by curiosity and personal interest. The teacher offers no answers but instead records comments on a flip-chart as the class discusses. Nor does the lesson end with an answer. In fact it doesn’t end when the bell goes: the students are still arguing on the way out.”

As for what he sees as the currently dominant alternative:

“Students are working harder than ever to pass tests but schools allow no time for true learning in the Socratic tradition.”

“Far from being open spaces for free enquiry, the classroom of today resembles a military training ground, where students are drilled to produce perfect answers to potential examination questions.”

…You get the drift.

A bit sarcastically, I write in the Comments section:

“So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher.”

To my surprise, other readers respond. I find the responses interesting. (Numbers of “Likes” current as of 9 October, 2016.)

Richard Phelps
So, the ideal class is the one in which the teacher does the least amount of work possible. How nice …for the teacher.     Like 0

Dan Fouts
If only it were like that! The ideal classroom described in this article would be led by a teacher who does a very different kind of work– coaching others to think rather than dictating everything–Being patient with confusion rather than rushing to answers– Discarding pre-determined outcomes and instead promoting outcomes that reveal themselves within lessons. This is very difficult, time-consuming teacher work.     Like 2

Richard Phelps
One purpose for tests is as an indicator to parents and taxpayers that their children are learning something. How would you convince parents and taxpayers that students have “learned how to think”? I presume that there is no test for that, and that you might not want to use it even if it existed, as that could induce “teaching to the test”. So, what would you tell them?     Like 1

Dan Fouts
Great point and questions. Therein lies the challenge. Since thinking itself is a mental process, it eludes empirical measurement in a very real way. We are in an education system that places value on things only if students can show they can DO something (this is the behaviorist model) and only if what they do is measurable using the language of mathematics. Standardized tests are wonderful models to use once we have embraced these assumptions. Cultivating independent thinking isn’t really on the radar.

Though I tell them that writing assessments or projects (as referenced in the article) are better vehicles to demonstrate independent thinking.     Like 2

John Taylor
I would agree with you Dan. Project work has the advantage that it is conducted over a period of time, during which a range of skills can be exhibited, and, typically, the teacher can form a better judgement of the student’s capacity for thinking their way through a problem. Exams, being a snapshot, are limited in this regard and the assessment of factual recall tends to be to the fore, as opposed to capacities for reflection, questioning of assumptions, exploration of creative new options, and so on. I think too that we could make more use of the viva; in my experience, asking a student to talk for a few minutes is an excellent way of gauging the depth of their understanding     Like 1

Ian Wardell
Teaching people the ability to think is more important than passing tests. What is important is the ability of people to think for themselves and to attain understanding. Not to simply unthinkingly churn out what others have said.     Like 0

Richard P. Phelps
Again, how do you measure that? How can a parent or taxpayer know that their children are better off for having gone to school? How do you prove or demonstrate that a child is now better able to think than before?     Like 1

Amritt Flora
Richard, therein lies the dilemma – the need for people to measure rather than believe. If we stopped being obsessed with measuring and categorising so deeply everything we do, we would be in a better position. You should only need to talk to a child to know that they have learned to think. Maybe we don’t have time to do that.     Like 0

Brian Fraser
I would ask them to read “An Atom or a Nucleus?” It takes the position that the thing that has virtually all the mass of the atom, and which accounts for all the properties of the atom, is actually the atom itself, not some sort of “nucleus” of something. This goes contrary to what we have been taught for the past 100 years.

This is supposedly “hard science” physics. But it raises deeply disturbing questions about Pavlovian style education.

The link is http://scripturalphysics.org/4v4a/ATMORNUC.html (Take the test at the end of the article)

If we are wrong about the atom “having” a nucleus, we could be wrong about A LOT of things, even in the “objective sciences”.     Like 0

Young Thug
I think most parents want what is best for their children. I don’t think anyone wants their child to be a little robot who can take a test but not navigate through life and all its challenges. And if they do, that’s just sad. It should be noted that the author did not say we should do away with examinations. In fact, they said this kind of class increases performance on examinations, and I have first hand experience with that since I teach a class after school, on a volunteer basis, that also uses a discussion format. Our program has also helped improve test scores among students that took it (and this in a lower income neighborhood) and we have data to prove it. So the results will show, I have confidence in that.

But there is an easy way parents can know what their kids are learning in school. They can just talk to them. And these kids actually want to be in my classroom. One time, I was going to cancel class because my co-facilitator did not show up and she had all the materials. But the kids, and this is, let me remind you, AFTER school, came trailing into an empty classroom with their chairs and started setting up. I told them they had the day off, they could go play. They kept on setting up and said they wanted to have the class anyway and since I was there I might as well do it. This kids wanted to be there. These are regular kids by the way, chosen at random by the after school program. They wanted to be in that class because we have great discussions. These discussions are not random though, the questions are carefully chosen based on a curriculum that has been scientifically validated, and we guide the discussion along to make sure it goes somewhere productive. We don’t take a fully Socratic approach, we have a mixed teaching and discussion style. The classes are about an hour and a half long. And I’ve had parents come up to me many times and thank me personally because they have seen their children change after taking my program. So if kids are interested and engaged in school, they will talk to and tell you about it if you ask. Because they are interested, and kids, like all people, like speaking about things they are interested in.     Like 2

Ian Wardell
Nice for the teacher, nice for the children, nice for society as a whole that we are educating people to think for themselves.     Like 0

Richard P. Phelps
“we are educating people to think for themselves” How do you know you are? How do you measure it?     Like 0

Nicola Williams
This type of teaching takes a great deal of preparation, and I would say it is actually far more challenging for a teacher to guide and direct students towards answers and valuable discussion than to spout out the answers themselves. The teacher who looks like they are doing very little, and manages to guide students to a point where they have learnt something, is an outstanding teacher – they pull the strings, and the students are guided into finding the answers themselves: students feel fantastic because they did it ‘on their own’, and, because they did the legwork instead of writing down an answer they were told, it sticks in their mind for much longer.     Like 1

Digital Diogenes Aus
Teaching to the test is easy.
Sure, its stressful and a lot of work, but it’s a lot of grunt work.
Teaching in the Socratic fashion is hard- you actually have to know what you’re talking about, you have to know your kids, and you have to consistently stay ahead of the curve     Like 1

Posted in Education policy, K-12, Richard P. Phelps, Testing/Assessment

More Common Core salespersons’ salaries

In a previous post, I summarized recent Form 990s—the financial reporting documents required of large US non-profits by the Internal Revenue Service—filed by three organizations. The Thomas B. Fordham Institute, the Alliance for Excellent Education, and the National Center on Education and the Economy were and are paid handsomely to promote the Common Core Standards and affiliated programs.

Here, I review Form 990s for three more Common Core-connected organizations—Achieve, The Council of Chief State School Officers (CCSSO), and PARCC, Inc.

PARCC, the acronym for Partnership for Assessment of Readiness for College and Careers, represents one of two Common Core-affiliated testing consortia. I attempted to find Form 990s for the other testing consortium, Smarter-Balanced, but failed. They would appear to be very well hidden, inside the labyrinthine accounting structure of either the University of California-Los Angeles (UCLA) or the University of California system.

The most recently available documents online for each organization included below emanate from either the 2013 or 2014 tax and calendar year. According to Achieve’s filing, it spun off PARCC, Inc. as “an unrelated entity” exactly midway through 2014.

Now for the salaries…

Achieve, Inc.
Achieve2013 – Achieve claimed four program activities for the year, all associated with “college and career ready initiatives”. Six employees, including President Michael Cohen and Senior Math Associate Laura Slover, received financial compensation in excess of $200,000, and twenty in excess of $100,000. Another $195,000 went to Common Core Standards writer Sue Pimentel living up in New Hampshire, as “consultant”. Public Opinion Strategies received over $175,000 for “research”. “Council of State Science Supervisor” “consultants” collectively absorbed half a million.

Oddly, Achieve listed zero expenses for “lobbying” and “advertising and promotion”. Instead, it categorized almost $5 million under “Other professional fees”. Almost a million each was spent on travel and “conferences, conventions, and meetings.”

Council of Chief State School Officers
CCSSO2014 – CCSSO received over $2.5 million in member dues, primarily from states paying for places at the table for their state chief education officers. Not many years ago, these dues, plus whatever surplus income it kept from annual meeting registrations, paid its rent and salaries.

In 2014, however, “contracts, grants, & sponsorships” income exceeded $31 million, twelve times the amount from dues and meetings. CCSSO in its current form could easily survive a loss of member dues payments; it could not survive a loss of contracts and grants—read Common Core promotion payments. The tail now wags the dog.

Twenty-six CCSSO staffers received salaries in excess of $100,000 annually. At least another six took home more than $200,000. The CEO, Chris Minnich, got more than a quarter million. Over half a million was claimed for “lobbying to influence a legislative body (direct lobbying)”, but $0 as “lobbying to influence public opinion (grass roots lobbying).” Yet, at another juncture, a “grassroots nontaxable amount” of $250,000 is declared.

CCSSO spent over $8 million on travel in 2014, more than on salaries and wages.

So much money flows through CCSSO that it earned almost a quarter million dollars from investments alone in 2014.

PARCC, Inc.
PARCC2014 – According to Achieve, PARCC, Inc. began life on July 1, 2014. Nonetheless, its top officers seem to have earned healthy annual salaries: seven in excess of $100,000 and two in excess of $200,000. Laura Slover, last seen above as Senior Math Associate at Achieve in 2013, became CEO of PARCC, Inc. in 2014, with over a quarter million in salary. PARCC spent $1.242 million on travel in 2014.

PARCC’s revenue consisted of $66 million in government grants, and $0.6 million from everywhere else. PARCC’s expenses comprised $34.8 million to NCS Pearson and $6.4 million to ETS for test development, and $1.3 million to Rupert Murdoch’s and Joel Klein’s Amplify Education and $0.8 million to Breakthrough Technologies for IT work.

Posted in Common Core, Education policy, Education Reform, Richard P. Phelps, Testing/Assessment

Does Common Core add up for California’s math students?*

As this public school year begins, districts across California are reporting student performance on new exams based on California’s adaptation of the controversial Common Core federal standards. Students and parents have good reason to be anxious about the newly released scores now and for years to come.

The first thing we are told by state officials is that the exams are based on “more rigorous Common Core academic standards.” In many states, the remark would be correct. But in California, especially in mathematics, the exact opposite is true.

California and Massachusetts had the best state standards in the country and we have both lost them, along with the excellent CSTs (California Standards Tests) and each school’s API (Academic Performance Index). The API’s two 1-10 scores were based on the school’s CSTs — collective student performance — against all California schools and also against 100 comparable schools. Although simplistic, these were amazingly effective. They were incomparably better than the new color-coded “scores” that interested observers will not understand, probably by design.

There is a widely held misconception that multiple-choice tests are misinforming because it is “easy for students to guess answers.” This belief ignores the reality that all students are in the same boat, with strong students having a better opportunity to demonstrate what they know.

As described by the officials, the new test requires students to answer follow-up questions and perform a task that shows their research and problem-solving skills. Nice as this sounds, reality is that it makes the mathematics tests far more verbal. Any student with weak reading and writing skills is unfairly assessed. That is especially problematic for English learners.

Low socio-economic Latino kids will be further burdened in demonstrating their mathematics competence, and Chinese or Korean immigrants who are a couple of years ahead mathematically (as was my daughter-in-law when she immigrated as a fifth-grader from Korea) will be told their mathematics competence is deficient. Absolutely absurd. Mathematics carried her for a couple of years until her English became good enough for academic work in other subjects.

The Common Core math standards, and the misguided philosophy of mathematics education behind them, are the heart of the problem. The new assessments simply reflect them. They say mathematics is best learned through students’ exploration of lengthy “real world” problems rather than the artificial setting of a competent teacher teaching a concept followed by straightforward applications thereof.


[Video: Stephen Colbert reports on Common Core confusion]


Reality is that traditional (albeit contrived) word problems lead to better retention and use of the mathematics involved. Comparison with the highly effective Singapore Primary Math Series is illustrative.

Another misconception, held by many teachers and assessment “experts,” is that Common Core expects students to use nonstandard arithmetic algorithms. As a result, these nonstandard procedures are often taught in place of the familiar ones, e.g., borrowing and carrying in subtraction and addition, and vertical multiplication with its place-value shift across successive digits, to parents’ considerable frustration. Stephen Colbert’s delightful derision, which you can find by googling Colbert and Common Core, provides an example of that frustration.

Hard as it is to believe, one of the top three guides for the national math standards, and the sole guide for California’s new exams from the Smarter Balanced Assessment Consortium, has no degree in mathematics; his degree is in English literature.

Moreover, both the corresponding curricula and these less meaningful assessments are exactly what the Math Wars of the 1990s were about. The former standards, which came out in late 1997, were written by a subgroup of the Stanford mathematics faculty and were based on the goal of making eighth-grade algebra a realistic opportunity for all California students, not just those whose parents can afford a good private school.

The idea that the Common Core standards and associated assessments are more rigorous and provide greater opportunities for California students is based on ignorance or, worse, is completely disingenuous.


Wayne Bishop is a professor of mathematics at Cal State Los Angeles.

*Originally published in the San Gabriel Valley [Los Angeles] Tribune, 2 September, 2016

Posted in Common Core, Education Fraud, Education policy, Education Reform, K-12, Testing/Assessment, Wayne Bishop

Johns Hopkins’ flawed report on Kentucky

It looks like a recent, very problematic report from Johns Hopkins University, “For All Kids, How Kentucky is Closing the High School Graduation Gap for Low-Income Students,” is likely to get pushed well beyond the Bluegrass State’s borders.

The publishers just announced a webinar on this report for August 30th.

Anyway, you need to get up to speed on why this report is built on a foundation of sand. You can do that fairly quickly by checking these blog posts:

News release: The uneven quality of Kentucky’s high school diplomas

More on the quality control problems with Kentucky’s high school diplomas – Part 1

A third blog post will be released at 8 a.m. Eastern tomorrow. It will probably be linked at:

More on the quality control problems with Kentucky’s high school diplomas – Part 2

I won’t know for sure until it is released, however.

Let me know if you have questions and especially if this Hopkins report starts making the rounds in your state.

Posted in College prep, Common Core, Education journalism, Education policy, Education Reform, K-12, research ethics, Richard Innes

101 Terms for Denigrating Others’ Research

In scholarly terms, a review of the literature or literature review is a summation of the previous research conducted on a particular topic. With a dismissive literature review, a researcher assures the public that no one has yet studied a topic or that very little has been done on it. Dismissive reviews can be accurate, for example with genuinely new scientific discoveries or technical inventions. But, often, and perhaps usually, they are not.

A recent article in the Nonpartisan Education Review includes hundreds of such statements—dismissive reviews—from some prominent education policy researchers.* Most of their statements are inaccurate; perhaps all of them are misleading.

“Dismissive review”, however, is the general term. In the “type” column of the files linked to the article, a finer distinction is made among simply “dismissive”—meaning a claim that there is no or little previous research, “denigrating”—meaning a claim that previous research exists but is so inferior it is not worth even citing, and “firstness”—a claim to be the first in the history of the world to ever conduct such a study. Of course, not citing previous work has profound advantages, not least of which is freeing up the substantial amount of time that a proper literature review requires.

By way of illustrating the alacrity with which some researchers dismiss others’ research as not worth looking for, I list the many terms marshaled for the “denigration” effort in the table below. I suspect that in many cases, the dismissive researcher has not even bothered to look for previous research on the topic at hand, outside his or her small circle of colleagues.

Regardless, the effect of the dismissal, particularly when coming from a highly influential researcher, is to discourage searches for others’ work, and thus draw more attention to the dismisser. One might say that “the beauty” of a dismissive review is that rival researchers are not cited, referenced, or even identified, thus precluding the possibility of a time-consuming and potentially embarrassing debate.

Just among the bunch of high-profile researchers featured in the Nonpartisan Education Review article, one finds hundreds of denigrating terms employed to discourage the public, press, and policymakers from searching for the work done by others. Some in-context examples:

  • “The shortcomings of [earlier] studies make it difficult to determine…”
  • “What we don’t know: what is the net effect on student achievement?
    -Weak research designs, weaker data
    -Some evidence of inconsistent, modest effects
    Reason: grossly inadequate research and evaluation”
  • “Nearly 20 years later, the debate … remains much the same, consisting primarily of opinion and speculation…. A lack of solid empirical research has allowed the controversy to continue unchecked by evidence or experience…”

To consolidate the mass of verbiage somewhat, I group similar terms in the table below.

(Frequency)   Denigrating terms used for other research
(43)   [not] ‘systematic’; ‘aligned’; ‘detailed’; ‘comprehensive’; ‘large-scale’; ‘cross-state’; ‘sustained’; ‘thorough’
(31)    [not] ‘empirical’; ‘research-based’; ‘scholarly’
(29)   ‘limited’; ‘selective’; ‘oblique’; ‘mixed’; ‘unexplored’
(19)   ‘small’; ‘scant’; ‘sparse’; ‘narrow’; ‘scarce’; ‘thin’; ‘lack of’; ‘handful’; ‘little’; ‘meager’; ‘small set’; ‘narrow focus’
(15)   [not] ‘hard’; ‘solid’; ‘strong’; ‘serious’; ‘definitive’; ‘explicit’; ‘precise’
(14)   ‘weak’; ‘weaker’; ‘challenged’; ‘crude’; ‘flawed’; ‘futile’
(9)    ‘anecdotal’; ‘theoretical’; ‘journalistic’; ‘assumptions’; ‘guesswork’; ‘opinion’; ‘speculation’; ‘biased’; ‘exaggerated’
(8)    [not] ‘rigorous’
(8)    [not] ‘credible’; ‘compelling’; ‘adequate’; ‘reliable’; ‘convincing’; ‘consensus’; ‘verified’
(7)    ‘inadequate’; ‘poor’; ‘shortcomings’; ‘naïve’; ‘major deficiencies’; ‘futile’; ‘minimal standards of evidence’
(5)    [not] ‘careful’; ‘consistent’; ‘reliable’; ‘relevant’; ‘actual’
(4)    [not] ‘clear’; ‘direct’
(4)    [not] ‘high quality’; ‘acceptable quality’; ‘state of the art’
(4)    [not] ‘current’; ‘recent’; ‘up to date’; ‘kept pace’
(4)    ‘statistical shortcomings’; ‘methodological deficiencies’; ‘individual student data, followed school to school’; ‘distorted’
(2)    [not] ‘independent’; ‘diverse’

As well as illustrating the readiness with which some researchers denigrate the work of rivals, the table shows how easy it is to do: hundreds of terms stand ready for dismissing entire research literatures. Moreover, if others’ research must satisfy the hundreds of sometimes-contradictory characteristics listed above simply to merit acknowledgement, it is not surprising that so many of the studies undertaken by these influential researchers are touted as the first of their kind.

* Phelps, R.P. (2016). Dismissive reviews in education policy research: A list. Nonpartisan Education Review/Resources/DismissiveList.htm
http://nonpartisaneducation.org/Review/Resources/DismissiveList.htm

Posted in Censorship, Education journalism, Education policy, information suppression, research ethics, Richard P. Phelps

‘One size fits all’ national tests not deeper or more rigorous

http://www.educationnews.org/education-policy-and-politics/one-size-fits-all-national-tests-not-deeper-or-more-rigorous/

Some say that now is a wonderful time to be a psychometrician — a testing and measurement professional. There are jobs aplenty, with high pay and great benefits. Work is available in the private sector at test development firms; in recruiting, hiring, and placement for corporations; in public education agencies at all levels of government; in research and teaching at universities; in consulting; and many other spots.

Moreover, there exist abundant opportunities to work with new, innovative, “cutting edge”, methods, techniques, and technologies. The old, fuddy-duddy, paper-and-pencil tests with their familiar multiple-choice, short-answer, and essay questions are being replaced by new-fangled computer-based, internet-connected tests with graphical interfaces and interactive test item formats.

In educational testing, the Common Core Standards Initiative (CCSI), and its associated tests, developed by the Smarter-Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), has encouraged the movement toward “21st century assessments”. Much of the torrential rain of funding that burst forth from federal and state governments and from clouds of wealthy foundations has pooled in the pockets of psychometricians.

At the same time, however, the country’s most authoritative psychometricians—the very people who would otherwise have been available to guide, or caution against, the transition to the newer standards and tests—have been co-opted. In some fashion or another, they now work for the CCSI. Some work for the SBAC or PARCC consortia directly, some work for one or more of the many test development firms hired by the consortia, some help the CCSI in other capacities. Likely, they have all signed confidentiality agreements (i.e., “gag orders”).

Psychometricians who once had been very active in online chat rooms or other types of open discussion forums on assessment policy no longer are, except to proffer canned promotions for the CCSI entities they now work for. They are being paid well. They may be doing work they find new, interesting, and exciting. But, with their loss of independence, society has lost perspective.

Perhaps the easiest vantage point from which to see this loss of perspective is in the decline of adherence to test development quality standards, those that prescribe the behavior of testing and measurement professionals themselves. Over the past decade, for example, the International Test Commission (ITC) alone has developed several sets of standards.

Perhaps the oldest set of test quality standards was established originally by the American Psychological Association (APA) and was updated most recently in 2014—the Standards for Educational and Psychological Testing (AERA, NCME, APA). It contains hundreds of individual standards. The CCSI as a whole, and the SBAC and PARCC tests in particular, fail to meet many of them.

The problem starts with what many professionals consider the testing field’s “prime directive”—Standard 1.0 (AERA, NCME, APA, p.23). It reads as follows:

“Clear articulation of each intended test score interpretation for a specified use should be set forth, and appropriate validity evidence in support of each intended interpretation should be provided.”

That is, a test should be validated for each purpose for which it is intended to be used before it is used for that purpose. Before it is used to make important decisions. And, before it is advertised as serving that purpose.

Just as states were required by the Race to the Top competition for federal funds to accept Common Core standards before they had even been written, CCSI proponents have boasted about their new consortium tests’ wonderful benefits since before test development even began. They claimed unproven qualities about then non-existent tests because most CCSI proponents do not understand testing, or they are paid not to understand.

In two fundamental respects, the PARCC and SBAC tests will neither match their boosters’ claims nor meet basic accepted test development standards. First, single tests are promised to measure readiness for too many and too disparate outcomes—college and careers—that is, all possible futures. It is implied that PARCC and SBAC will predict success in art, science, plumbing, nursing, carpentry, politics, law enforcement …any future one might wish for.

This is not how it is done in educational systems that manage multiple career pathways well. There, in Germany, Switzerland, Japan, Korea, and, unfortunately, only a few jurisdictions in the U.S., a range of different types of tests is administered, each appropriately designed for its target profession. Aspiring plumbers take plumbing tests. Aspiring medical workers take medical tests. And, those who wish to prepare for more advanced degrees might take more general tests that predict their aptitude to succeed in higher education institutions.

But that isn’t all. SBAC and PARCC are said to be aligned to the K-12 Common Core standards, too. That is, they both summarize mastery of past learning and predict future success. One test purports to measure how well students have done in high school, and how well they will do in either the workplace or in college, three distinctly different environments, and two distinctly different time periods.

PARCC and SBAC are being sold as replacements for state high school exit exams, for 4-year college admission tests (e.g., the SAT and ACT), for community college admission tests (e.g., COMPASS and ACCUPLACER), and for vocational aptitude tests (e.g., ASVAB). Problem is, these are very different types of tests. High school exit exams are generally not designed to measure readiness for future activity but, rather, to measure how well students have learned what they were taught in elementary and secondary schools. We have high school exit exams because citizens believe it important for their children to have learned what is taught there. Learning Civics well in high school, for example, may not correlate highly with how well a student does in college or career, but many nonetheless consider it important for our republic that its citizens learn the topic.

High school exit exams are validated by their alignment with the high school curriculum, or content standards. By contrast, admission or aptitude tests are validated by their correlation with desired future outcomes—grades, persistence, productivity, and the like in college—their predictive validity. In their pure, optimal forms, a high school exit exam, a college admission test, and vocational aptitude tests bear only a slight resemblance to each other. They are different tests because they have different purposes and, consequently, require different validations.

————

Let’s assume for the moment that the Common Core consortia tests, PARCC and SBAC, can validly measure all that is claimed for them—mastery of the high school curriculum and success in further education and in the workplace. The fact is no evidence has yet been produced that verifies any of these things. And, remember, the proof of, and the claims about, a new test’s virtues are supposed to be provided before the test is used purposefully.

Sure, Common Core proponents claim to have just recently validated their consortia tests for correlation with college outcomes, for alignment with elementary and secondary school content standards, and for technical quality. The clumsy studies they cite do not match the claims made for them, however.

SBAC and PARCC cannot be validated for their purpose of predicting college and career readiness until data are collected in the years to come on the college and career outcomes of those who have taken the tests in high school. The study cited by Common Core proponents uses the words “predictive validity” in its title. Only in the fine print does one discover that, at best, the study measured “concurrent” validity—high school tests were administered to current rising college sophomores and compared to their freshman-year college grades. Calling that “predictive validity” is, frankly, dishonest.

It might seem less of a stretch to validate SBAC and PARCC as high school exit exam replacements. After all, supposedly they are aligned to the Common Core Standards so in any jurisdiction where the Common Core Standards prevail, they would be retrospectively aligned to the high school curriculum. Two issues tarnish this rosy picture. First, the Common Core Standards are narrow in subject matter, covering just mathematics and English Language Arts, with no attention paid to the majority of the high school curriculum.

Second, common adherence to the Common Core Standards across the States has deteriorated to the point of dissolution. As the Common Core consortia’s grip on compliance (i.e., alignment) continues to loosen, states, districts within states, and schools within districts are teaching how they want and what they want. The less aligned Common Core Standards become, the less valid the consortium tests become as measures of past learning.

As for technical quality, the Fordham Institute, which is paid handsomely by the Bill & Melinda Gates Foundation to promote Common Core and its consortia tests, published a report which purports to be an “independent” comparative standards alignment study. Among its several fatal flaws: instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employed “a brand new methodology” specifically developed for Common Core and its copyright owners, and paid for by Common Core’s funders.

Though Common Core consortia test sales pitches may be the most disingenuous, SAT and ACT spokespersons haven’t been completely forthright either. To those concerned about the inevitable degradation of predictive validity if their tests are truly aligned to the K-12 Common Core standards, public relations staffs assure us that predictive validity is a foremost consideration. To those concerned about the inevitable loss of alignment to the Common Core standards if predictive validity is optimized, they assure complete alignment.

So, all four of the test organizations have been muddling the issue. It is difficult to know what we are going to get with any of the four tests. They are all straddling or avoiding questions about the trade-offs. Indeed, we may end up with four, roughly equivalent, muddling tests, none of which serve any of their intended purposes well.

This is not progress. We should want separate tests, each optimized for a different purpose, be it measuring high school subject mastery, or predicting success in 4-year college, in 2-year college, or in a skilled trade. Instead, we may be getting several one-size-fits-all, watered-down tests that claim to do all but, as a consequence, do nothing well. Instead of a skilled tradesperson’s complete tool set, we may be getting four Swiss army knives with roughly the same features. Instead of exploiting psychometricians’ advanced knowledge and skills to optimize three or more very different types of measurements, we seem to be reducing all of our nationally normed end-of-high-school tests to a common, generic muddle.

————

References

McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

Nichols-Barrer, I., Place, K., Dillon, E., & Gill, B. (2015, October 5). Final Report: Predictive Validity of MCAS and PARCC: Comparing 10th Grade MCAS Tests to PARCC Integrated Math II, Algebra II, and 10th Grade English Language Arts Tests. Cambridge, MA: Mathematica Policy Research. http://econpapers.repec.org/paper/mprmprres/a2d9543914654aa5b012e4a6d2dae060.htm

Phelps, R.P. (2016, February). Fordham Institute’s pretend research. Policy Brief. Boston: Pioneer Institute. http://pioneerinstitute.org/featured/fordhams-parcc-mcas-report-falls-short/

American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME). (2014). Standards for Educational and Psychological Testing. Washington, DC: AERA.

Posted in College prep, Common Core, Education policy, Education Reform, K-12, research ethics, Richard P. Phelps, Testing/Assessment

Some Common Core Salespersons’ Salaries: DC Edu-Blob-ulants

Linked are copies of Form 990s for Marc Tucker’s National Center for Education and the Economy (NCEE), Checker Finn’s Fordham Foundation and Fordham Institute, and Bob Wise’s Alliance for Excellent Education (AEE). Each pays himself and at least one other well.

All non-profit organizations with revenues exceeding $50,000 must file Form 990s annually with the Internal Revenue Service. And, in return for the non-profits’ tax-exempt status, their Form 990s are publicly available.

As to salaries…

National Center for Education and the Economy
NCEE2013Form990 – Marc Tucker pays himself $501,087, and six others receive from $162k to $379k (p.40 of 48); his son, Joshua Tucker, receives $214,813 (p. 42)
…also interesting: p.16 (contrast with p.15), pp. 19, 27, 37

Alliance for Excellent Education
AEE2013Form990 – Bob Wise pays himself $384,325, and six others receive from $162k to $227k. (see p.27 of 36)
…also interesting: p.24 (“Madoff Recovery”)

Thomas B. Fordham Foundation & Institute
FordhamF2013Form990 & FordhamI2013Form990 – With both a “Foundation” and an “Institute”, Checker Finn and Mike Petrilli can each pay themselves about $100k, twice. (see p.25 of 42)
…also interesting: p.19 ($29 million in investments; $1.5 million for an interest rate swap); p.37 (particularly the two entries for “Common Sense Offshore, Ltd.”)

Posted in Common Core, Education policy, Education Reform, Ethics, research ethics

Censorship at Education Next

In response to Education Next’s recent misleading articles about a fall 2015 Mathematica report that claims to (but does not) find predictive validity for the PARCC test with Massachusetts college students, I wrote the text below and submitted it to EdNext as a comment on the article. The publication neither published my comment nor provided any explanation. Indeed, the comments section appears to have vanished entirely.

http://educationnext.org/testing-college-readiness-massachusetts-parcc-mcas-standardized-tests/

“First, the report attempts to calculate only general predictive validity. The type of predictive validity that matters is “incremental predictive validity”—the amount of predictive power left over when other predictive factors are controlled. If a readiness test is highly correlated with high school grades or class rank, it provides the college admission counselor no additional information. It adds no value. The real value of the SAT or ACT is in the information it provides admission counselors above and beyond what they already know from other measures available to them.

“Second, the study administered grade 10 MCAS and PARCC tests to college students at the end of their freshmen years in college, and compared those scores to their first-year grades in college. Thus, the study measures what students learned in one year of college and in their last two years of high school more than it measures what they knew as of grade 10. The study does not actually compute predictive validity; it computes “concurrent” validity.

“Third, student test-takers were not representative of Massachusetts tenth graders. All were volunteers; and we do not know how they learned about the study or why they chose to participate. Students not going to college, not going to college in Massachusetts, or not going to these colleges in Massachusetts could not have participated. The top colleges—where the SAT would have been most predictive—were not included in the study (e.g., U. Mass-Amherst, any private college, or elite colleges outside the state). Students not going to college, or attending occupational certificate training programs or apprenticeships–for whom one would suspect the MCAS would be most predictive–were not included in the study.”
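To make the first point concrete, here is a minimal sketch, using hypothetical made-up numbers and plain NumPy (not data from the Mathematica study), of what “incremental predictive validity” means in practice: the gain in explained variance (R-squared) when the readiness test is added to a prediction model that already includes high school GPA.

```python
import numpy as np

# Hypothetical, made-up data for illustration only (not from any real study).
rng = np.random.default_rng(0)
n = 500
hs_gpa = rng.normal(3.0, 0.5, n)                      # what the admission counselor already knows
test = 0.8 * hs_gpa + rng.normal(0.0, 0.5, n)         # a readiness test highly correlated with GPA
college_gpa = 0.6 * hs_gpa + 0.1 * test + rng.normal(0.0, 0.4, n)  # outcome to be predicted

def r_squared(predictors, outcome):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(outcome)), predictors])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1.0 - residuals.var() / outcome.var()

r2_gpa_only = r_squared(hs_gpa, college_gpa)
r2_gpa_plus_test = r_squared(np.column_stack([hs_gpa, test]), college_gpa)

# Incremental predictive validity: the extra variance explained once GPA is controlled.
print(f"R^2, GPA only:   {r2_gpa_only:.3f}")
print(f"R^2, GPA + test: {r2_gpa_plus_test:.3f}")
print(f"Incremental R^2: {r2_gpa_plus_test - r2_gpa_only:.3f}")
```

A small incremental R-squared would mean the test adds little information beyond what grades already provide, which is precisely the distinction the comment draws between general and incremental predictive validity.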

Posted in Censorship, College prep, Common Core, Education journalism, Ethics, information suppression, K-12, research ethics, Richard P. Phelps, Testing/Assessment

Hard Work by Students

In my ten years of HS teaching I saw good (hard-working, interested in learning) students do well with good teachers, and ALSO do pretty well with poor teachers…

I saw poor (not working, not interested in learning) students do poorly with poor teachers and ALSO do poorly with good teachers….

From this I have derived my Brilliant Insight:

“The most important variable in student academic achievement is student academic work.” (not teacher quality, although that can make some difference)…

This Insight gains no traction, in spite of its obvious truth, I think, because ED Leaders, Pundits, Planners, Designers, etc., believe they are helpless to increase student interest in doing academic work.

So they Plan, Lead, Design, Critique, and so on, and the issue of student academic work just gets no attention.

This all seems too simple-minded, of course, but an increase in student academic work is guaranteed to improve student academic achievement (in every group) and it is irresponsible to ignore it, however difficult it may be to influence.

Posted in Education policy, Education Reform, research ethics, Will Fitzhugh

The Education Writers Association casts its narrowing gaze on Boston, May 1-3


Billions have been spent, and continue to be spent, promoting the Common Core Standards and their associated consortium tests, PARCC and SBAC. Nonetheless, the “Initiative” has been stopped in its tracks largely by a loose coalition of unpaid grassroots activists. That barely-organized amateurs could match the many well-organized, well-paid professional organizations tells us something about Common Core’s natural appeal, or lack thereof. Absent the injection of huge amounts of money and political mandates, there would be no Common Core.

The Common Core Initiative (CCI) does not progress, but neither does it go away. Its alleged primary benefit—alignment both within and across states (allegedly producing valid cross-state comparisons)—continues to degrade as participating states make changes that suit them. The degree of Common Core adoption varies greatly from state to state, and politicians’ claims about the degree of adoption even more so. CCI is making a mess and will leave a mess behind that will take years to clean up.

How did we arrive in this morass? Many would agree that our policymakers have failed us. Politicians on both sides of the aisle naively believed CCI’s “higher, deeper, tougher, more rigorous” hype without making any effort to verify the assertions. But, I would argue that the corps of national education journalists is just as responsible.

Too many of our country’s most influential journalists accept and repeat verbatim the advertising slogans and talking points of Common Core promoters. Too many of their stories source information from only one side of the issue. Most annoying, to those of us eager for some journalistic balance, has been some journalists’ tendency to rely on Common Core promoters to identify the characteristics and explain the motives of Common Core opponents.

An organization claiming to represent and support all US education journalists sets up shop in Boston next week for its annual “National Seminar”. The Education Writers Association’s (EWA’s) national seminars introduce thousands of journalists to sources of information and expertise. Many sessions feature journalists talking with other journalists. Some sessions host teachers, students, or administrators in “reports from the front lines” type panel discussions. But, the remaining and most ballyhooed sessions feature non-journalist experts on education policy fronting panels with, typically, a journalist or two hosting. Allegedly, these sessions interpret “all the research”, and deliver truth, from the smartest, most enlightened on earth.

Given its central role, and the profession it represents, one would expect diligence from EWA in representing all sides and evidence. Indeed, EWA claims a central purpose “to help journalists get the story right.”

Rummaging around EWA’s web site can be revealing. I located website material classified under their “Common Core” heading: 192 entries overall, including 6 EWA Radio broadcast transcripts, links to 19 research or policy reports, 1 “Story Lab”, 8 descriptions of and links to organizations useful for reporters to know, 5 seminar and 3 webinar agendas, 11 links to reporters’ stories, and 42 links to relevant multimedia presentations.

I was interested to learn the who, what, where, and how of EWA sourcing of education research and policy expertise. In reviewing the mass of material the EWA classifies under Common Core, then, I removed that which was provided by reporters and ignored that which was obviously purely informational, provided it was unbiased (e.g., non-interpretive reporting of poll results, thorough listing of relevant legislative actions). What remains is a formidable mass of material—in the form of reports, testimonies, interviews, essays, seminar and webinar transcripts, and so on.

So, whom does the EWA rely on for education policy expertise “to help journalists get the story right”? Which experts do they invite to their seminars and webinars? Whose reports and essays do they link to? Whose interviews do they link to or post? Remember, journalists are trained to represent all sides to each story, to summarize all the evidence available to the public.

That’s not how it works at the Education Writers Association, however. Over the past several years, EWA has provided speaking and writing platforms for 102 avowed Common Core advocates, 7 avowed Common Core opponents, 12 who are mostly in favor, and one who is mostly opposed.[1] Randomly select an EWA Common Core “expert” from the EWA website, and the odds exceed ten to one the person will be an advocate and, more than likely, a paid promoter.

Included among the 102 Common Core advocates for whom the EWA provided a platform to speak or write are officials from the “core” Common Core organizations, the Council of Chief State School Officers (CCSSO), the National Governors Association (NGA), the Partnership for Assessment of Readiness for College and Careers (PARCC), and the Smarter-Balanced Assessment Consortium (SBAC). Also included are representatives from research and advocacy organizations paid by the Bill and Melinda Gates Foundation and other funding sources to promote the Common Core Standards and tests: the Thomas B. Fordham Institute, the New America Foundation, the Center for American Progress, the Center on Education Policy, and the Business Roundtable. Moreover, one finds ample representation in EWA venues of organizations directly profiting from PARCC and SBAC test development activity, such as the Center for Assessment, WestEd, the Rand Corporation, and professors from the Universities of North Carolina and Illinois, Harvard and Stanford Universities, UCLA, Michigan State, and Southern Cal (USC).

Most of the small contingent of Common Core opponents does not oppose the Common Core initiative, standards, or tests per se but rather tests in general, or the current quantity of tests. Among the seven attributions to avowed opponents, three are to the National Center for Fair and Open Testing (a.k.a., FairTest), an organization that opposes all meaningful standards and assessments, not just Common Core.

The seven opponents comprise one extreme advocacy group, a lieutenant governor, one local education administrator, an education graduate student, and another advocacy group called Defending the Early Years, which argues that the grades K–2 Common Core Standards are age-inappropriate (i.e., too difficult). No think tank analysts. No professors. No celebrities.

Presumably, this configuration of evidence and points of view represents reality as the leaders of EWA see it (or choose to see it): 102 in favor and 7 opposed; several dozen PhDs from the nation’s most prestigious universities and think tanks in favor and 7 fringe elements opposed. Accept this as reality and pro-CCI propaganda characterizations of their opponents might seem reasonable. Those in favor of CCI are prestigious, knowledgeable, trustworthy authorities. Those opposed are narrow minded, self-interested, uninformed, inexpert, or afraid of “higher, deeper, tougher, more rigorous” standards and tests. Those in favor of CCI want progress; those opposed do not.

In a dedicated website section, EWA describes and links to eight organizations purported to be good sources for stories on the Common Core. Among them are the core CCI organizations Achieve, CCSSO, NGA, PARCC, and SBAC; and the paid CC promoter, the Fordham Institute. The only opposing organization suggested? — FairTest.

There remain two of the EWA’s favorite information sources, the American Enterprise Institute (AEI) and the American Federation of Teachers (AFT), which I have categorized as mostly pro-CCI. Both received funding from the Gates Foundation early on to promote the Initiative. When the tide of public opinion began to turn against the Common Core, however, both organizations began shuffling their stance and straddling their initial positions. Each has since adopted the “Common Core is a great idea, but it has been poorly implemented” theme.

So, what of the great multitude who desire genuinely higher standards and consequential tests and recognize that CCI brings neither? …who believe Common Core was never a good idea, never made any sense, and should be completely dismantled? Across several years, categories, and types of EWA coverage, one finds barely a trace of representation.

The representation of research and policy expertise at EWA national seminars reflects that at its website. Keynote speakers include major CCI advocates College Board President David Coleman (twice), US Education Secretary Arne Duncan (twice), Secretary John King, Governor Bill Haslam, and “mostly pro” AFT President Randi Weingarten, along with the unsure Governor Charlie Baker. No CCI opponents.

Among other speakers presented as experts in CCI-related sessions at the Nashville Seminar two years ago were 14 avowed CCI advocates[2], one of the “mostly pro” variety, and one critic, local education administrator Carol Burris. At least ten of the 14 pro-CCI experts have worked directly in CCI-funded endeavors. Last year’s Chicago Seminar featured nine CCI advocates[3] and one opponent, Robert Schaeffer of FairTest. Five of the nine advocates have worked directly in CCI-funded endeavors.

In addition to Secretary John King’s keynote, this year’s Boston Seminar features a whopping 16 avowed CCI proponents, two of the “mostly pro” persuasion, and one opponent, Linda Hanson, a local area educator and union rep. At least ten of the 16 proponents have worked in CCI-funded activities.

One session entitled “The Massachusetts Story” might have invited some of those responsible for the rise of the Commonwealth from a middling performer twenty years ago to the nation’s academic leader ten years ago (some of whom feel rather upset with the Commonwealth’s adoption of Common Core Standards in 2010). Sandy Stotsky, for example, wrote many of the Commonwealth’s English Language Arts standards in the 1990s, might be the country’s most prolific writer on CCI issues, and lives in Boston. Instead, EWA invited three after-the-fact regional leaders who promote the CCI.

In general, some of EWA’s most called-upon experts work in think tanks. EWA loves think tanks. While in Chicago, they could have invited scholars affiliated with the Heartland Institute, a staunch opponent of the CCI. But, they didn’t. For the Boston meeting, they could have invited scholars affiliated with the Pioneer Institute (e.g., Sandy Stotsky and R. James Milgram, both of whom served on the CCI’s validation committee); Pioneer is arguably the country’s leading source of scholarly opposition to the CCI. But, they haven’t.

Turns out, the only think tanks that matter in EWA’s judgment are national think tanks. Not being located in Washington, DC, Heartland and Pioneer might be considered “regional” think tanks, despite all the effort they put into national issues. Instead of inviting locally-based think tankers opposed to the CCI in Chicago and Boston, EWA preferred to fly CCI think tank advocates out from DC.

For the “reform” side of education issues, in general, EWA invitations appear stuck inside a tight little circle. EWA frequently calls upon Harvard-affiliated folk (e.g., Chingos, Ferguson, Fryer, Hess, Ho, Kane, Long, Loveless, Mehta, Putnam, Reville, Rhee, Sahlberg, Schwartz, West). EWA is also quite fond of anyone who has worked for Chester “Checker” Finn (e.g., Petrilli, Pondiscio, Northern, Smarick, Brickman, and Polikoff).

There are many thousands of education researchers in the world, thousands of higher education institutions, and hundreds of relevant research journals. But the EWA has chosen to rely almost exclusively on an infinitesimal proportion of them for expertise. Ironically, the tiny group on which they depend comprises some of the world's most poorly read and censorious researchers.[4]

EWA likes the Fordham Institute especially well. Within the past few years, EWA has conferred upon Fordham an EWA best web site award and, to Fordham’s Robert Pondiscio, a National Award for Education Reporting in the “Education Organizations and Experts” category. Fordham and Pondiscio accepted their awards in Nashville.

Several possible explanations for the Education Writers Association's expertise-sourcing myopia come to mind, such as a lack of resources, convenience, naïveté, passivity (e.g., expecting experts to contact them rather than looking for them), and an irresistible attraction to money and power (e.g., EWA sponsors seem very well represented at EWA venues). But, chief among them, to my observation, are elitism and a wholesale conflation of celebrity with expertise. Far too often, the EWA features "expert" opinion from someone who is well known as a commentator on education policy generally (or, at least, well known generally) but who knows next to nothing about the topic at hand.

At EWA seminars, whether national, regional, or topical, one observes an effort to make good use of local education researchers and university professors, but not just any. There are several universities in Tennessee, but Vanderbilt professors overwhelmed the agenda at EWA’s Nashville meeting. Likewise, there exist many universities in the Chicago area, but EWA preferred to invite those from the University of Chicago and Northwestern, the two most elite. Boston University is hosting next week’s Boston meeting, and several of its academics will be involved in session panels. But, twice as many will come from Harvard.

In a variety of ways, the Education Writers Association functions to centralize expertise sourcing. If there were no EWA, the thousands of education journalists who attend its seminars would initiate all their expertise sourcing on their own. The result, in the absence of EWA's suggestions, would be a much wider variety of expertise sourced. And the US populace would be much better informed.

The EWA is run by education journalists with national ambitions. Through efforts such as the EWA Seminars, the national group imposes its bias toward Washington, DC power and celebrity on its thousands of members. As a result, it serves not as a muckraker or spokesperson for the less powerful, but largely as a booster for the public relations push of wealthy, established interests.

Could all this just be sour grapes? After all, right there on its web site EWA offers in large, bold letters “Opportunities for Exposure”. If one is dissatisfied with the status quo, why not take them up on their offer? The body of the text reads “Sponsorship, Exhibition, & Advertising Available Now!”. Oh, right, that’s why.

 

Endnotes

[1] Not counting the few sources delivering neutral information, nor the “reports from the front lines” panels of teachers and school administrators (most of whom, at EWA meetings, appear to support the CCI).

[2] Michael Cohen (Achieve), Terry Holiday (Commonwealth of Kentucky), Jamie Woodson (TN SCORE), Dennis Van Roekel (NEA), Amber Northern (Fordham Institute), William Schmidt (Michigan State U), Sandra Alberti (Student Achievement Partners), Jacqueline King (SBAC), Laura Slover (PARCC), Tommy Bice (State of Alabama), Kristen DiCerbo (Pearson Inc.), Kevin Huffman (TN DOE), Lisa Guernsey (New America Foundation), and Robert Pondiscio (Education Next, Fordham Institute)

[3] Morgan Polikoff (USC, Fordham), Andy Isaacs (Everyday Math, U. Chicago), Dana Cartier (IL Center for School Improvement), Diane Briars (NCTM), Matt Chingos (Brookings), Scott Marion (Center for Assessment), Chris Minnich (CCSSO), James Pellegrino (U. Illinois-Chicago), and Andrew Latham (WestEd).

[4] http://nonpartisaneducation.org/Review/Resources/DismissiveList.htm

 

Posted in Common Core, Education journalism, Education policy, Education Reform, Education Writers Association, research ethics, Richard P. Phelps, Testing/Assessment | Tagged , , , , , | Leave a comment

PEISCH SAYS REPEALING COMMON CORE WOULD BE “HUGE MISTAKE”

It seems that some Massachusetts representatives don't think that parents, teachers, and administrators should be allowed to vote by secret ballot on whether they want to keep Common Core's inferior standards or return to the state's superior standards, junked by its state board of education in July 2010. Why does this state representative think that it is better for Bay State schools to address standards written in 2009 in Washington, DC, by unqualified people, funded chiefly by the Bill and Melinda Gates Foundation? Here is a State House News reporter's April 26 account of how some Beacon Hill legislators think about the ballot question to end Common Core in Massachusetts.

By Andy Metzger
STATE HOUSE NEWS SERVICE
STATE HOUSE, BOSTON, APRIL 26, 2016…..At odds over the future of charter schools in Massachusetts, the co-chairwomen of the Education Committee may be more closely aligned on a proposal to revert state curriculum standards to their prior iteration.

The proposal to restore education standards in place before Massachusetts adopted Common Core in 2010 appears headed for the ballot, as does a citizens initiative to increase charter school enrollment.

Rep. Alice Peisch, a Wellesley Democrat and House chairwoman of the Education Committee, said the Common Core repeal would be a mistake. Her co-chairwoman on the committee, Sen. Sonia Chang-Diaz, said she is disinclined to vote for the proposal but hasn't yet staked out a position.
“If that ballot question were to pass, that is six years of work that will be irrelevant,” Peisch told members of local school committees on Tuesday. She said, “I think it would be a huge mistake for a ballot question to determine what students learn.”

Sandra Stotsky, who helped draft the old “first class” Massachusetts standards, told the News Service pulling the Bay State out of Common Core would stop the “damage” caused by the multi-state standard.

“The ballot question says let’s go back to the standards we know worked,” said Stotsky, who said Common Core includes “nonsense statements.”

The referendum would reverse a move taken by the Board of Elementary and Secondary Education in 2010, restore the prior frameworks, and establish new processes for developing curriculum frameworks.

Peisch said the state’s teachers “have all been trained in the new standards” and the state is going out to bid for a new assessment – dubbed MCAS 2.0 – “that will be aligned with the standards.”

A former senior associate commissioner in the state’s education department, Stotsky said the Common Core standards are “unteachable.”

“They’re unteachable in that they require skills that kids don’t have and that teachers can’t easily teach,” Stotsky told the News Service.

Speaking at the Tuesday event organized by the Massachusetts Association of School Committees, Chang-Diaz, a Jamaica Plain Democrat and co-chairwoman of the Education Committee, had a more measured take on the proposal.

“For folks who are worried about losing self-determination as a state over our own curriculum frameworks, there’s nothing about the Common Core that prevents us from doing that,” Chang-Diaz said. She said she has “trouble understanding” the “content-based objection” to Common Core.

Chang-Diaz told the News Service she wanted to read the question before forming a conclusion.

“I haven’t read it yet, so I think I will not be voting for that ballot question, but I’m a stickler for reading things before I state a final position,” Chang-Diaz said. She said, “I don’t have to vote on that for a while.”

The Common Core repeal referendum (H 3929) is currently before the Education Committee, which had a hearing on it in March. Without action by the Legislature before May 4 – which appears unlikely – supporters of the move away from Common Core could collect another 10,792 signatures around the state to place the matter before voters.
-END-
4/26/2016

Posted in Common Core, Education policy, Education Reform, Ethics, K-12, Mathematics, Reading & Writing, Sandra Stotsky, Testing/Assessment | Tagged , , , | 1 Comment

Fordham Institute’s pretend research

The Thomas B. Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs: the Common Core-derived SBAC and PARCC, ACT’s Aspire, and the Commonwealth of Massachusetts’ MCAS.[ii] Of course, anyone familiar with Fordham’s past work knew beforehand which tests would win.

This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.

  1. Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core’s wealthy benefactors. But, they imply that they have covered a relevant and adequately wide range of sources.
  2. Instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ “a brand new methodology” specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core’s funders.
  3. Instead of suggesting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.
  4. Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away—“If you don’t care for the standards…you should probably ignore this study” (p. 4).
  5. Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.

1.  Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.[iii] A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

  • the Human Resources Research Organization (HumRRO),[vi]
  • the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation “Criteria,”[vii]
  • the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
  • Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

The Common Core’s grandees have always only hired their own well-subsidized grantees for evaluations of their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual Mental Measurements Yearbook for the entire world to see, and critique. Indeed, Buros exists to conduct test reviews, and retains hundreds of the world’s brightest and most independent psychometricians on its reviewer roster. Why did Common Core’s funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute, …that is, had the purpose genuinely been an objective evaluation.

2.  A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of North American testing experts is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.[x]

Had Fordham compared the tests using the Standards for Educational and Psychological Testing (or any of a number of other widely respected test evaluation standards, guidelines, or protocols[xi]), SBAC and PARCC would have flunked. They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.[xii]

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.[xiii],[xiv] Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.[xv]

The authors write, “The quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels” (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict. In any case, given the evaluation criteria employed, the outcome was foreordained regardless of whom they hired to review, not unlike a rigged election in a dictatorship where voters’ choices are restricted to already-chosen candidates.

Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 14 of the criteria.[xvi] And those just happened to be criteria mostly favoring PARCC and SBAC.

Without exception the Fordham study avoided all the evaluation criteria in the categories:

“Meet overall assessment goals and ensure technical quality”,

“Yield valuable reports on student progress and performance”,

“Adhere to best practices in test administration”, and

“State specific criteria”[xvii]

What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state’s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).

The report does not evaluate the “quality” of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests. The only evaluative criteria used from the CCSSO’s Criteria are in the two categories “Align to Standards—English Language Arts” and “Align to Standards—Mathematics” and, even then, only for grades 5 and 8.

Nonetheless, the authors claim, “The methodology used in this study is highly comprehensive” (p. 74).

The authors of the Pioneer Institute’s report How PARCC’s false rigor stunts the academic growth of all students,[xviii] recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice/short answer/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xix]. Ironically, it is they—opponents of traditional testing content and formats—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xx]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xxi]

PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two. It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC and SBAC tests “deeper” than others. In practice, the alleged deeper parts are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxiii] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

3.  The authors continue the Common Core sales tendency of attributing benefits to their tests for which no evidence exists. For example, the Fordham report claims that SBAC and PARCC will:

“make traditional ‘test prep’ ineffective” (p. 8)

“allow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do” (p. 8)

produce “test scores that more accurately predict students’ readiness for entry-level coursework or training” (p. 11)

“reliably measure the essential skills and knowledge needed … to achieve college and career readiness by the end of high school” (p. 11)

“…accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.” (p. 3)

eliminate the problem of “students … forced to waste time and money on remedial coursework.” (p. 73)

help “educators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students’ success.” (p. 73)

The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They have more in common with carnival fortune-telling than with empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But that just affirms how irresponsible it is to make such claims absent any evidence.

Furthermore, in most cases, past experience would suggest just the opposite of what Fordham asserts. Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, rather, convoluted), introducing more “construct-irrelevant variance”—that is, students will get lower scores for failing to figure out the item formats or computer operations, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.

As for Common Core, SBAC, and PARCC eliminating the “problem of” college remedial courses, such will be done by simply cancelling remedial courses, whether or not they might be needed, and lowering college entry-course standards to the level of current remedial courses.

4.  When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: “If you don’t care for the standards…you should probably ignore this study” (p. 4).

Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries, and a scholar of curriculum and instruction so widely respected she could easily have gotten wealthy had she chosen to succumb to the financial temptation of the Common Core’s profligacy as so many others have. Stotsky authored the Fordham Institute’s “very first study” in 1997, apparently. Presumably, the authors of this report drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)

Tellingly, one will find Stotsky’s name nowhere after the first paragraph. None of her (or anyone else’s) many devastating critiques of the Common Core tests is either mentioned or referenced. Genuine research does not hide or dismiss its critiques; it addresses them.

Ironically, the authors write, “A discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.” Indeed.

5.  Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don’t.

For PARCC and SBAC one reads:

“strong content, quality, and rigor”

“stronger tests, which encourage better, broader, richer instruction”

“tests that focus on the essential skills and give clear signals”

“major improvements over the previous generation of state tests”

“complex skills they are assessing.”

“high-quality assessment”

“high-quality assessments”

“high-quality tests”

“high-quality test items”

“high quality and provide meaningful information”

“carefully-crafted tests”

“these tests are tougher”

“more rigorous tests that challenge students more than they have been challenged in the past”

For other tests one reads:

“low-quality assessments poorly aligned with the standards”

“will undermine the content messages of the standards”

“a best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today’s college and career readiness standards”

“have generally focused on low-level skills”

“have given students and parents false signals about the readiness of their children for postsecondary education and the workforce”

Appraising its own work, Fordham writes:

“groundbreaking evaluation”

“meticulously assembled panels”

“highly qualified yet impartial reviewers”

Considering those who have adopted SBAC or PARCC, Fordham writes:

“thankfully, states have taken courageous steps”

“states’ adoption of college and career readiness standards has been a bold step in the right direction.”

“adopting and sticking with high-quality assessments requires courage.”

 

A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.

The Fordham report claims that PARCC and SBAC are governed by “member states”, whereas ACT Aspire is owned by a private organization. Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors Association, and only each state’s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state or, rather, its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.[xxiv]

Just so you know, on page 71, Fordham recommends that states eliminate any tests that are not aligned to the Common Core Standards, in the interest of efficiency, supposedly.

In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, “We at Fordham don’t plan to stay in the test-evaluation business”.

 

[i] Nancy Doorey & Morgan Polikoff. (2016, February). Evaluating the content and quality of next generation assessments. With a Foreword by Amber M. Northern & Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/publications/evaluating-the-content-and-quality-of-next-generation-assessments

[ii] PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).

[iii] The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli to each pay themselves two six-figure salaries instead of just one.

[iv] http://www.gatesfoundation.org/search#q/k=Fordham

[v] See, for example, http://www.ohio.com/news/local/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318 ; http://www.cleveland.com/metro/index.ssf/2015/03/ohios_charter_schools_ridicule.html ; http://www.dispatch.com/content/stories/local/2014/12/18/kasich-to-revamp-ohio-laws-on-charter-schools.html ; https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/12/troubled-ohio-charter-schools-have-become-a-joke-literally/

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 23 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2016 collectively exceeding $100 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[viii] http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] The authors write that the standards they use are “based on” the real Standards. But, that is like saying that Cheez Whiz is based on cheese. Some real cheese might be mixed in there, but it’s not the product’s most distinguishing ingredient.

[xi] (e.g., the International Test Commission’s (ITC) Guidelines for Test Use; the ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores; the ITC Guidelines on the Security of Tests, Examinations, and Other Assessments; the ITC’s International Guidelines on Computer-Based and Internet-Delivered Testing; the European Federation of Psychologists’ Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)

[xii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiii] http://www.ccsso.org/Documents/2014/CCSSO%20Criteria%20for%20High%20Quality%20Assessments%2003242014.pdf

[xiv] A rationale is offered for why they had to develop a brand new set of test evaluation criteria (p. 13). Fordham claims that new criteria were needed so that some criteria could be weighted more heavily than others. But weights could easily be applied to any criteria, including the tried-and-true, preexisting ones.

[xv] For an extended critique of the CCSSO Criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvi] Doorey & Polikoff, p. 14.

[xvii] MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) as a grade 10 high school exit exam, that tests students in several subject fields (and not just ELA and math), and provides specific and timely instructional feedback.

[xviii] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xix] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xx] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xxi] McQuillan, Phelps, & Stotsky, p. 46.

[xxiii] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

[xxiv] For an in-depth discussion of these governance issues, see Peter Wood’s excellent Introduction to Drilling Through the Core, http://www.amazon.com/gp/product/0985208694

Posted in College prep, Common Core, Education policy, Education Reform, Ethics, K-12, Mathematics, Reading & Writing, research ethics, Richard P. Phelps, Testing/Assessment, Uncategorized | Tagged , , , , , , , , , , , , , | 3 Comments

How the USED has managed to get it wrong, again

https://www.washingtonpost.com/news/answer-sheet/wp/2016/02/03/dad-my-state-now-requires-11th-graders-to-take-the-sat-not-my-daughter/

An interesting dilemma. Common Core’s writers planned for a grade 11 test that would tell us whether or not students were college and career ready. Parents and state legislators don’t know who sets the cut score, what test items are on it, and what exactly a passing score on a college readiness test means, academically. Yet, all those who pass and enroll in a post-secondary educational institution are entitled to credit-bearing coursework in their freshman year.

So, why should most students wanting to go to a public college take a college admissions test, such as the ACT or SAT? No need to waste time and money on another, unnecessary test that is also “aligned to” Common Core, we are told.

But, that means the SAT and ACT companies lose a lot of money. So, what does the USED do to try to make sure they don’t lose money? It tells states that instead of a Common Core-based test in grade 11, they can require the SAT or ACT for all students for “federal accountability.” Almost a dozen states have fallen for this idiotic idea.

It turns out that an increasing number of colleges no longer require SAT or ACT scores. http://news.yahoo.com/heres-happened-school-made-sats-202000551.html Why? Among other reasons, the tests can no longer tell them much about success at post-secondary institutions where all students are entitled to credit-bearing courses in their freshman year if they pass a grade 11 Common Core-based test—and can’t be given a placement test to determine remediation level. Some public college presidents or administrators had already agreed to that on their state’s application for Race to the Top (RTTT) funds. Since then, more have. God help the freshman course instructor who doesn’t pass students who were declared college-ready to begin with.

Nor can the tests tell the colleges whether or not the students know much about whatever they studied in K-12. Why? The tests were developed to serve as college admissions tests to predict success in college, not as high school achievement tests. According to some math teachers, they contain material (some Algebra II, trigonometry items) that students haven’t been taught in a Common Core-based curriculum and don’t assess everything important that has been taught.

Worse yet, USED seems to want states to eliminate all other tests—the non-Common Core-based tests, possibly including teacher-made tests (on the grounds of getting rid of excessive testing)—and to make passing a grade 11 college and career ready test all that is required for a high school diploma (the requirements might include course titles whose content is presumably addressed by Common Core standards, such as English, Algebra I, and Geometry). Almost everyone will have to be passed, or there will be an uproar from the parents of low-achieving students. (Their writing is no longer required by the SAT.)

States adopted Common Core because they believed it would be the silver bullet that made all students college and career ready. If they also believe that all students declared college and career ready are thereby qualified to take credit-bearing coursework in post-secondary education, how can they not give a high school diploma to anyone who passes the grade 11 test? Even if they don’t know what’s on it, who set the cut score and determined who should pass, and what passing the test really means academically. The SAT and ACT are private companies and are not obligated to release any information they don’t want to release.

Who cares if all or most kids don’t want to go to college? Who cares what’s on the tests given in grade 11? All that matters is that the state has met what is required for federal accountability and will get ESSA funds and other money to give its K-12 schools, while it taxes those who can still afford to pay for the increasing costs of less and less teaching and learning. Graduate schools may not care, since they will be able to find enough tuition-paying qualified students from other countries.

Posted in College prep, Common Core, Education policy, ESSA, K-12, Reading & Writing, Sandra Stotsky, Testing/Assessment | Tagged , , , , , , , , | 1 Comment

Fordham report predictable, conflicted

On November 17, the Massachusetts Board of Elementary and Secondary Education (BESE) will decide the fate of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC) in the Bay State. MCAS is homegrown; PARCC is not. Barring unexpected compromises or subterfuges, only one program will survive.

Over the past year, PARCC promoters have released a stream of reports comparing the two testing programs. The latest arrives from the Thomas B. Fordham Institute in the form of a partial “evaluation of the content and quality of the 2014 MCAS and PARCC” relative to the “Criteria for High Quality Assessments”[i] developed by one of the organizations that co-sponsored the development of Common Core’s standards—with the rest of the report to be delivered in January, it says.[ii]

PARCC continues to insult our intelligence. The language of the “special report” sent to Mitchell Chester, Commissioner of Elementary and Secondary Education, reads like that of a legitimate study.[iii] The research it purports to have done even incorporated some processes typically employed in studies with genuine intentions of objectivity.

No such intentions could validly be ascribed to the Fordham report.

First, Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the standards and its associated testing programs. A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

– the Human Resources Research Organization (HumRRO), which will deliver another pro-PARCC report sometime soon,[vi]
– the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the “Criteria,”[vii]
– the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of the other federally-subsidized Common Core-aligned testing program, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
– Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

Fordham acknowledges the pervasive conflicts of interest it claims it faced in locating people to evaluate MCAS versus PARCC. “…it is impossible to find individuals with zero conflicts who are also experts”.[x] But, the statement is false; hundreds, perhaps even thousands, of individuals experienced in “alignment or assessment development studies” were available.[xi] That they were not called reveals Fordham’s preferences.

A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of test developers is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-sponsored the development of Common Core’s standards (the Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others. Thus, Fordham compares PARCC to MCAS according to specifications that were designed for PARCC.[xii]

Had Fordham compared MCAS and PARCC using the Standards for Educational and Psychological Testing, MCAS would have passed and PARCC would have flunked. PARCC has not yet accumulated the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests it will fail on all three counts.[xiii]

Third, PARCC should have been flunked had Fordham compared MCAS and PARCC using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 15 of the criteria.[xiv] And those just happened to be the criteria favoring PARCC.

Fordham agreed to compare the two tests with respect to their alignment to Common Core-based criteria. With just one exception, the Fordham study avoided all the criteria in the groups “Meet overall assessment goals and ensure technical quality”, “Yield valuable reports on student progress and performance”, “Adhere to best practices in test administration”, and “State specific criteria”.[xv]

Not surprisingly, Fordham’s “memo” favors the Bay State’s adoption of PARCC. However, the authors of How PARCC’s false rigor stunts the academic growth of all students,[xvi] a report released one week before Fordham’s “memo,” recommend strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also do not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated over the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that ordinary multiple-choice-predominant standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xvii]. Ironically, it is they—opponents of traditional testing regimes—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xviii]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is done by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xix]

PARCC and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two.[xx] It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC tests “deeper” than others. In practice, the alleged deeper parts of PARCC are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxi] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

Dr. Richard P. Phelps is editor or author of four books: Correcting Fallacies about Educational and Psychological Testing (APA, 2008/2009); Standardized Testing Primer (Peter Lang, 2007); Defending Standardized Testing (Psychology Press, 2005); and Kill the Messenger (Transaction, 2003, 2005), and founder of the Nonpartisan Education Review (http://nonpartisaneducation.org).

[i] http://www.ccsso.org/Documents/2014/CCSSO%20Criteria%20for%20High%20Quality%20Assessments%2003242014.pdf

[ii] Michael J. Petrilli & Amber M. Northern. (2015, October 30). Memo to Dr. Mitchell Chester, Commissioner of Elementary and Secondary Education, Massachusetts Department of Elementary and Secondary Education. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[iii] Nancy Doorey & Morgan Polikoff. (2015, October). Special report: Evaluation of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for the Assessment of Readiness for College and Careers (PARCC). Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[iv] http://www.gatesfoundation.org/search#q/k=Fordham

[v] See, for example, http://www.ohio.com/news/local/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318 ; http://www.cleveland.com/metro/index.ssf/2015/03/ohios_charter_schools_ridicule.html ; http://www.dispatch.com/content/stories/local/2014/12/18/kasich-to-revamp-ohio-laws-on-charter-schools.html ; https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/12/troubled-ohio-charter-schools-have-become-a-joke-literally/

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 22 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2015 exceeding $90 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[viii] http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] Doorey & Polikoff, p. 4.

[xi] To cite just one example, the world-renowned Center for Educational Measurement at the University of Massachusetts-Amherst has accumulated abundant experience conducting alignment studies.

[xii] For an extended critique of the CCSSO criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xiii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiv] Doorey & Polikoff, p. 23.

[xv] MCAS bests PARCC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) as a grade 10 high school exit exam, that tests students in several subject fields (and not just ELA and math), and provides specific and timely instructional feedback.

[xvi] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvii] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xviii] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xix] McQuillan, Phelps, & Stotsky, p. 46.

[xxi] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

Posted in Common Core, Education policy, Education Reform, Mathematics, Reading & Writing, research ethics, Richard P. Phelps, Testing/Assessment | Tagged , , , , , , , | Leave a comment

Trickle Down Academic Elitism

When [mid-20th century] I was in a private school in Northern California, I won a “gold” medal for first place in a track meet of the Private School Conference of Northern California for the high jump [5’6”]—which I thought was pretty high.

My “peers” in the Bay Area public high schools at the time were already clearing 6 feet, but I was, in fact, not in their league.

As the decades went by, high school students were clearing greater and greater heights, in the same way records were falling in all other sports.

The current high school record, set in July 2002 by Andra Manson of Kingston, Jamaica, at a high school in Brenham, Texas, is 7 feet, 7 inches [high jump, not pole vault].

How did this happen? Well, not by keeping progress in the high jump a secret.

A number of private schools in the Boston area have put an end to all academic prizes and honors, to keep those who don’t get them from feeling bad, but they still keep score in games, and they still report on and give prizes for elite athletic performances.

It seems obvious to me that by letting high school athletes know that the record for the high jump was moving up from five feet nothing to 7 feet, 7 inches, some large group of high school athletes decided to work at it and try to jump higher, with real success since 1950.

The Boston Globe has about 150 pages a year on high school sports, highlighting best performances in and results from all manner of athletic competitions. This must fuel ambition in other high school athletes to achieve more themselves, and even to merit mention in the media.

When it comes to high school academic achievement, on the other hand, The Boston Globe seems content to devote one page a year just to the valedictorians at the public high schools in Boston itself [usually half of them were born in another country, it seems].

Why is it that we are comfortable encouraging, supporting, seeking and celebrating elite performance in high school sports, but we seem shy, embarrassed, reluctant, ashamed, and even afraid to encourage, support, and acknowledge—much less celebrate—outstanding academic work by high school students?

Whatever the reasons, it seems likely that what we do will bring us more and better athletic efforts and achievements by high school students, while those students who really do want to achieve at the elite levels in their academic work can just keep all that to themselves, thank you very much. Seems pretty stupid to me, if we want, as we keep saying we want, higher academic achievement in our schools. Just a thought.

Posted in College prep, Education Fraud, Education policy, K-12, Testing/Assessment, Will Fitzhugh | Tagged , , , | Leave a comment

Common Core’s Language Arts

It is often said that scientific writing is dull and boring to read. Writers choose words carefully, mean for them to be interpreted precisely, and so employ vocabulary that may be precise but is often obscure. Judgmental terms—particularly the many adjectives and adverbs that imply goodness and badness or better and worse—are avoided. Scientific text is expected to present a neutral communication background against which the evidence itself, and not the words used to describe the evidence, can be evaluated on its own merits.

As should be apparent to anyone exposed to Common Core, PARCC, and SBAC publications and presentations, most are neither dull nor boring, and they eschew precise, obscure words. But, neither are they neutral or objective. According to their advocates, Common Core, PARCC, and SBAC are “high-quality”, “deeper”, “richer”, “rigorous”, “challenging”, “stimulating”, “sophisticated”, and assess “higher-order” and “critical” thinking, “problem solving”, “deeper analysis”, “21st-Century skills”, and so on, ad infinitum.

By contrast, they describe the alternatives to Common Core and Common Core consortia assessments as “simple”, “superficial”, “low-quality”, and “dull” artifacts of a “19th-Century” “factory model of education” that relies on “drill and kill”, “plug and chug”, “rote memorization”, “rote recall”, and other “rotes”.

Our stuff good. Their stuff bad. No discussion needed.

This is not the stuff of science, but of advertising. Given the gargantuan resources Common Core, PARCC, and SBAC advocates have had at their disposal to saturate the media and lobby policymakers with their point of view, that opponents could muster any hearing at all is remarkable. [1]

The word “propaganda” may sound pejorative, but it fits the circumstance. Advocates bathe their product in pleasing, complimentary vocabulary, while splattering the alternatives with demeaning and unpleasant words. Only sources supportive of the preferred point of view are cited as evidence. Counter evidence is either declared non-existent and suppressed, or discredited and misrepresented. [2]

Their version of “high-quality” minimizes the importance of test reliability (i.e., consistency and comparability of results), an objective and precisely measurable trait, and maximizes the importance of test validity, an imprecise and highly subjective trait, as they choose to define it. [3] “High-quality”, in Common Core advocates’ view, comprises test formats and item types that match their progressive, constructivist view of education. [4] “High-quality” means more subjective, and less objective, testing. “High-quality” means tests built the way they like them.
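
Reliability is indeed arithmetic: given a matrix of item scores, an internal-consistency coefficient such as Cronbach’s alpha falls out of a short formula. A minimal sketch in Python, with a made-up score matrix used purely for illustration (nothing here comes from any actual PARCC, SBAC, or MCAS data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item across students
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 6-student x 4-item score matrix, for illustration only.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
])
print(round(cronbach_alpha(scores), 3))  # one number, reproducible by anyone with the data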

“High quality” tests are also more expensive, take much longer to administer, and unfairly disadvantage already disadvantaged children, who are less likely to be familiar with complex test formats and computer-based assessment tools.

Read, for example, the report Criteria for high-quality assessment, written by Linda Darling-Hammond’s group at Stanford’s education school, people at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), housed at UCLA, and several other sympathizers. [5] These are groups with long histories of selective referencing and dismissive reviews. [6] The little research that supports their way of seeing things is highly praised. The far more voluminous research that contradicts their recommendations is ignored, demonized, ridiculed, or declared non-existent.

Unlike a typical scientific study write-up, Criteria for high-quality assessment brims with adjectival and adverbial praise for its favored assessment characteristics. Even its 14-page summary confronts the reader with “high-quality” 24 times; “higher” 18 times; “high-fidelity” seven times; “higher-level” four times; “deep”, “deeply”, or “deeper” 14 times; “critical” or “critically” 17 times; and “valuable” nine times. [7]
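
Tallies like these are easy to check. Below is a minimal sketch of how one might replicate such a count, assuming the 14-page summary has been saved locally as plain text under the hypothetical filename summary.txt; the report’s authors’ exact counting rules are not stated, so totals from this naive word-boundary matching may differ slightly.

```python
import re
from collections import Counter

# Hypothetical local plain-text copy of the report's 14-page summary.
with open("summary.txt", encoding="utf-8") as f:
    text = f.read().lower()

terms = ["high-quality", "high-fidelity", "higher-level", "higher",
         "deep", "deeply", "deeper", "critical", "critically", "valuable"]

counts = Counter()
for term in terms:
    # Word-boundary matching: "deep" will not match "deeper", and "critical"
    # will not match "critically". Note, however, that "higher" also matches
    # inside "higher-level", because the hyphen itself is a word boundary.
    counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", text))

for term, n in counts.items():
    print(f"{term}: {n}")
```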

As Orwell said, control language and you control public policy. Common Core, PARCC, and SBAC proponents are guilty not only of biased promotion, selective referencing, and dismissive reviews, but also of “floating” the definitions of terms.

For example, as R. James Milgram explains:

“The dictionary meaning of “rigorous” in normal usage in mathematics is “the quality or state of being very exact, careful, or strict” but in educationese it is “assignments that encourage students to think critically, creatively, and more flexibly.” Likewise, educationists may use the term rigorous to describe “learning environments that are not intended to be harsh, rigid, or overly prescriptive, but that are stimulating, engaging, and supportive.” In short the two usages are almost diametrically opposite.” [8]

Such bunkum has sold us Common Core, PARCC, and SBAC. The progressive education/constructivist radical egalitarians currently running many U.S. education schools can achieve their aims simply by convincing super-naïve but well-endowed foundations and the U.S. Education Department (under both Republican and Democratic administrations) that they intend “higher”, “deeper”, “richer”, “more rigorous” education when, in fact, they target a dream of Rousseau-inspired discovery learning. They crave the open-inquiry, students-build-your-own-education of Summerhill School, even for the poor, downtrodden students who arrive at school with little to build from.

So many naïve, gullible, well-intentioned wealthy foundations dispensing money to improve US education. So many experienced, well-rehearsed, true believers ready to channel that money in the direction that serves their goals.

 

[1] For example, from the federal government alone, PARCC received $185,862,832 on August 13, 2013. https://www2.ed.gov/programs/racetothetop-assessment/parcc-budget-summary-tables.pdf ; SBAC received $175,849,539 to cover expenses to September 30, 2014. https://www2.ed.gov/programs/racetothetop-assessment/sbac-budget-summary-tables.pdf. A complete accounting, of course, would include vast sums from the Bill and Melinda Gates Foundation, other foundations, the CCSSO, NGA, Achieve, and state governments.

[2] This behavior—of selective referencing and dismissive reviews (i.e., declaring that contrary research either does not exist or is for some other reason not worth considering)—is not new to the Common Core campaign. It has been the standard operating procedure among U.S. education research and policy elites for decades. But, some of the most prominent and frequent users of these censorial techniques in the past are now high-profile salespersons for the Common Core, PARCC, and SBAC. See, for example, Richard P. Phelps. (2012, June). Dismissive reviews: Academe’s Memory Hole. Academic Questions, 25(2), pp. 228–241. http://www.nas.org/articles/dismissive_reviews_academes_memory_hole ; Phelps, R. P. (2007, Summer). The dissolution of education knowledge. Educational Horizons, 85(4), 232-247. http://nonpartisaneducation.org/Foundation/DissolutionOfKnowledge.pdf ; and Phelps, R.P. (2009). Worse than Plagiarism? Firstness Claims & Dismissive Reviews, Nonpartisan Education Review / Resources. Retrieved August 29, 2015 from http://www.nonpartisaneducation.org/Review/Resources/WorseThanPlagiarism.ppt

[3] Robert L. Ebel. (1961). “Must All Tests Be Valid?” American Psychologist, 16, pp. 640–647.

[4] “Constructivism is basically a theory — based on observation and scientific study — about how people learn. It says that people construct their own understanding and knowledge of the world, through experiencing things and reflecting on those experiences.” Here are two descriptions of constructivism: one supportive, http://www.thirteen.org/edonline/concept2class/constructivism/ and one critical, http://epaa.asu.edu/ojs/article/view/631

[5] Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013, June). Criteria for high-quality assessment. Stanford Center for Opportunity Policy in Education; National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago.

[6] See, for example, Richard P. Phelps. (2012). The rot festers: Another National Research Council report on testing. New Educational Foundations, 1. http://www.newfoundations.com/NEFpubs/NEFv1n1.pdf ; and Richard P. Phelps. (2015, July). The Gauntlet: Think tanks and federally funded centers misrepresent and suppress other education research. New Educational Foundations, 4. http://www.newfoundations.com/NEFpubs/NEF4Announce.html

[7] CCSSO. (2014). Criteria for procuring and evaluating high-quality assessments.

[8] See http://edglossary.org/rigor/. Dr. Milgram’s observation is expressed in R.P. Phelps & R.J. Milgram. (2014, September). The revenge of K-12: How Common Core and the new SAT lower college standards in the U.S. Boston: Pioneer Institute, p. 41. http://pioneerinstitute.org/featured/common-core-math-will-reduce-enrollment-in-high-level-high-school-courses/

Posted in Common Core, Education policy, Education Reform, Ethics, K-12, research ethics, Richard P. Phelps, Testing/Assessment, Uncategorized | Tagged , , , , , , | 1 Comment

Wayne Bishop’s observations on the Aspen Ideas Festival session, “Is Math Important?”

Editors’ Note:

David Leonhardt, Washington Bureau Chief for the New York Times, won a Pulitzer Prize for his reporting on economic issues and majored in applied mathematics as an undergraduate at Yale. Mr. Leonhardt chaired the panel “Deep Dive: Is Math Important?”, an “event” in the program track “The Beauty of Mathematics”. Other program track events included individual lectures from each of the panelists.

Mathematicians might consider the panel composition rather odd and ideologically one-sided. Three panelists are not mathematicians but are wholehearted believers in constructivist approaches to math education, often derided as “fuzzy math”. Two of them claim, ludicrously, that high-achieving East Asian countries teach math their way. The three are journalist Elizabeth Green, education professor Jo Boaler, and the College Board’s David Coleman, who holds a degree in English lit and classical philosophy. When only one side is allowed to talk, of course, it can make any claims it likes.

Watch for yourself: Aspen Ideas Festival: Deep Dive: Is Math Important?

http://video.pbs.org/video/2365521689/

Professor Bishop’s essay, written in the form of a letter to David Leonhardt, can be found here.
http://nonpartisaneducation.org/Review/Essays/v11n1.pdf

 

Posted in Education Fraud, K-12, math, Mathematics, Wayne Bishop | Leave a comment

David Coleman in Charge

Wayne Bishop recently made me aware of the unfortunate, completely one-sided discussion of US mathematics education at the recent Aspen Ideas Festival.

David Leonhardt, Washington Bureau Chief for the New York Times, won a Pulitzer Prize for his reporting on economic issues and majored in applied mathematics as an undergraduate at Yale. Mr. Leonhardt chaired the panel “Deep Dive: Is Math Important?”, an “event” in the program track “The Beauty of Mathematics”. Other program track events included individual lectures from each of the panelists.

Mathematicians might consider the panel composition rather odd and ideologically one-sided. Three panelists are not mathematicians but are wholehearted believers in constructivist approaches to math education, often derided as “fuzzy math”. Two of them claim, ludicrously, that high-achieving East Asian countries teach math their way. The three are journalist Elizabeth Green, education professor Jo Boaler, and the College Board’s David Coleman, who holds a degree in English lit and classical philosophy. When only one side is allowed to talk, of course, it can make any claims it likes.

Watch for yourself.

http://video.pbs.org/video/2365521689/

Observe David Coleman from minute 25:40 on, starting while Elizabeth Green is talking.

Then listen, from minute 26:55 on, as he asserts a “kind of dirty little secret” that:

“the worst math problems of all are test prep problems” these are problems manufactured to prepare for an exam and they are typically done, utterly… … if any good science or craft goes into making a really reliable assessment problem for an exam, none of that goes into test prep material. The test developers have hidden from… …because they are trying to hide the exam from the test prep people, to try to keep it, right?”

Test developers, including the College Board (at least until Coleman arrived), have long made complete retired operational exams freely available for test prep. These are not “manufactured to prepare for an exam”. They are the highest-quality test items, the ones that have survived the lengthy gauntlet of editorial review, item review, bias review, more editing, field trials, more editing, comprehensive statistical analyses, more editing if needed, and still more statistical analysis.
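
The “comprehensive statistical analyses” in that gauntlet are routine and well documented in the measurement literature. As an illustrative sketch only (invented responses, not the College Board’s actual procedures), here are two classical statistics item reviewers typically examine: item difficulty (the proportion answering correctly) and the corrected item-total point-biserial correlation.

```python
import numpy as np

# Invented 0/1 response matrix: 8 examinees x 3 items, for illustration only.
responses = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
])

total = responses.sum(axis=1)  # each examinee's total score

for i in range(responses.shape[1]):
    item = responses[:, i]
    difficulty = item.mean()  # classical p-value: proportion answering correctly
    # Corrected item-total correlation: correlate the item with the total
    # score computed from the remaining items, so the item is not
    # correlated with itself.
    rest = total - item
    point_biserial = np.corrcoef(item, rest)[0, 1]
    print(f"item {i + 1}: difficulty={difficulty:.2f}, r_pb={point_biserial:.2f}")
```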

Coleman has been at College Board for over three years, plenty of time to learn the trade. That, even now, he says “if any good science or craft goes into making a really reliable assessment problem” should frighten us all.

Posted in College prep, Common Core, Education policy, K-12, math, Mathematics, Richard P. Phelps, Testing/Assessment | Tagged , , , , , , , | 1 Comment