Censorship at Education Next

In response to its recent misleading article about a fall 2015 Mathematica report that claims to (but does not) find predictive validity for the PARCC test with Massachusetts college students, I wrote the text below and submitted it to Education Next as a comment on the article. The publication neither published my comment nor offered any explanation. Indeed, the comments section appears to have vanished entirely.


“First, the report attempts to calculate only general predictive validity. The type of predictive validity that matters is “incremental predictive validity”: the amount of predictive power that remains after other predictive factors are controlled. If a readiness test is highly correlated with high school grades or class rank, it gives the college admissions counselor no additional information; it adds no value. The real value of the SAT or ACT lies in the information it provides admissions counselors above and beyond what they already know from the other measures available to them.

“Second, the study administered the grade 10 MCAS and PARCC tests to college students at the end of their freshman year and compared those scores with their first-year college grades. Thus, the study measures what students learned in one year of college and in their last two years of high school more than what they knew as of grade 10. It does not actually compute predictive validity; it computes “concurrent” validity.

“Third, the student test-takers were not representative of Massachusetts tenth graders. All were volunteers, and we do not know how they learned about the study or why they chose to participate. Students not going to college, not going to college in Massachusetts, or not going to these particular Massachusetts colleges could not have participated. The top colleges, where the SAT would have been most predictive, were not included in the study (e.g., U. Mass-Amherst, any private college, or elite colleges outside the state). Nor were students attending occupational certificate training programs or apprenticeships, for whom one would suspect the MCAS would be most predictive.”
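The first point of the comment, incremental predictive validity, can be illustrated with a small simulation: fit one regression predicting college grades from high school GPA alone, fit a second that adds the readiness test score, and compare the two R² values. The difference is the test's incremental contribution. The data below are entirely hypothetical and invented for illustration; the correlations are assumptions, not figures from the Mathematica report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical simulated data: a readiness test that is highly
# correlated with high school GPA (illustrative assumption only).
hs_gpa = rng.normal(3.0, 0.5, n)
test = 0.9 * (hs_gpa - 3.0) / 0.5 + rng.normal(0, np.sqrt(1 - 0.81), n)
college_gpa = 0.6 * (hs_gpa - 3.0) / 0.5 + 0.1 * test + rng.normal(0, 1, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gpa = r_squared([hs_gpa], college_gpa)          # GPA alone
r2_both = r_squared([hs_gpa, test], college_gpa)   # GPA plus test
incremental = r2_both - r2_gpa                     # incremental validity

print(f"R^2, GPA only:    {r2_gpa:.3f}")
print(f"R^2, GPA + test:  {r2_both:.3f}")
print(f"Incremental R^2:  {incremental:.3f}")
```

When the test is nearly collinear with GPA, as simulated here, the incremental R² is small even if the test's simple (general) correlation with college grades looks impressive on its own, which is exactly the distinction the comment draws.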

This entry was posted in Censorship, College prep, Common Core, Education journalism, Ethics, information suppression, K-12, research ethics, Richard P. Phelps, Testing/Assessment. Bookmark the permalink.
