Summary/Review of CCRC’s Working Paper (Empirical)

The main findings are that the COMPASS test predicts student success better in math than in English. In both subjects, COMPASS does relatively well at predicting which students will perform well in a course; it is much less successful at identifying which students will perform at a C level (16). For these and other reasons, the paper recommends that colleges use multiple measures for placement or create alternative developmental paths to success.

Scott-Clayton says, “The predictive power of placement exams is in a sense quite impressive given how short they are (often taking about 20-30 minutes per subject/module). But overall the correlation between scores and later course outcomes is relatively weak” (37).

When the data are used to model what would happen if all students were placed directly into college-level coursework, the English placement tests “increase the success rates in college-level coursework […] by 11 percentage points” (27); however, “[…] these tests generate virtually no reduction in the overall severe error rate (in other words, while the placement tests reduce severe overplacements, they increase underplacements by the same amount)” (27).
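To make that offsetting-errors claim concrete, here is a minimal sketch with invented numbers (my own illustration, not Scott-Clayton’s data or code): a test that screens out would-be failures cuts severe overplacement, but if it also holds back an equal number of students who would have passed easily, the overall severe error rate does not move.

```python
# Illustrative only: the counts below are invented, not Scott-Clayton's data.
# "Severe overplacement": a student sent to the college-level course who
# would fail it; "severe underplacement": a student sent to remediation
# who would have done well (e.g., earned a B or better) in the college course.

cohort = 1000

# Counterfactual policy: everyone goes straight to the college-level course.
overplaced_all_college = 110   # would-be failures placed at college level
underplaced_all_college = 0    # nobody is held back
severe_error_all_college = (overplaced_all_college + underplaced_all_college) / cohort

# Test-based policy: the test screens out many would-be failures
# (reducing overplacement) but also holds back students who would
# have passed (creating underplacement).
overplaced_with_test = 55
underplaced_with_test = 55
severe_error_with_test = (overplaced_with_test + underplaced_with_test) / cohort

print(f"Severe error rate, all in college-level: {severe_error_all_college:.1%}")
print(f"Severe error rate, with placement test:  {severe_error_with_test:.1%}")
# Both print 11.0%: under these made-up numbers the test merely trades
# one kind of severe error for the other.
```

Under these hypothetical counts, testing buys no reduction in total severe error; the gain is entirely a swap of overplacement for underplacement, which is exactly the asymmetry the review below worries about.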

REVIEW:

One immediate concern I had when reading the data analysis is that withdrawals are treated the same as failing grades in Scott-Clayton’s analysis (14). Part of the rationale given for counting withdrawals as failures is “because withdrawal decisions are not likely to be random, but rather may be closely linked to expectations regarding course performance” (14). This is the only rationale provided, and it is not enough. If Scott-Clayton assumed students only withdraw when they think they will fail anyway, that assumption discounts the many students who withdraw each semester for personal reasons that have nothing to do with the course and everything to do with what is happening outside the classroom.

In the predictive validity section, Scott-Clayton quotes Sawyer as saying that the conditional probability of success can be estimated with reasonable accuracy when 25% or fewer students are placed in developmental courses (8), yet students are placed into developmental courses at much higher rates than that. Her solution was to eliminate the very low scorers in a sensitivity analysis. I have the benefit of hindsight here that the researcher did not: in 2015, three years after this publication, ACT Inc. admitted, “A thorough analysis of customer feedback, empirical evidence and postsecondary trends led us to conclude that ACT Compass is not contributing as effectively to student placement and success as it had in the past,” before voluntarily phasing out the test. Removing the very low scorers probably could not account for the large percentage of students who were being placed in developmental courses. Besides, the lowest levels of developmental courses exist to serve low-performing students. What concerns me most is how many borderline students have been under-placed into a course that delays their progress into credit-bearing courses. That did not seem to be the priority of this analysis.
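To see intuitively why accuracy degrades at high developmental-placement rates, here is a toy simulation (my own, not Sawyer’s or Scott-Clayton’s method; the score distribution, success curve, and linear extrapolation are all assumptions): when most students sit below the cutoff, their college-level outcomes are never observed, so any predicted probability of success for them must be extrapolated from an increasingly unrepresentative group.

```python
# Toy illustration, not a reproduction of any method in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
scores = rng.uniform(0, 100, n)                 # hypothetical placement scores
true_p = 1 / (1 + np.exp(-(scores - 50) / 12))  # assumed true P(pass | score)
passed = rng.random(n) < true_p                 # simulated college-level outcomes

for dev_share in (0.25, 0.60):                  # share placed into developmental
    cutoff = np.quantile(scores, dev_share)
    observed = scores >= cutoff                 # only these take the college course
    # Fit a simple line to the observed pass/fail outcomes, then
    # extrapolate below the cutoff, where no outcomes exist.
    slope, intercept = np.polyfit(scores[observed], passed[observed].astype(float), 1)
    pred_low = np.clip(slope * scores[~observed] + intercept, 0.0, 1.0)
    err = np.abs(pred_low - true_p[~observed]).mean()
    print(f"{dev_share:.0%} placed developmental -> mean abs. prediction "
          f"error below cutoff: {err:.3f}")
```

With 25% placed developmental, the extrapolation error for the unobserved students stays small; at 60% it grows several-fold, which is one way to see why Sawyer’s 25% threshold matters.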

“ACT Compass.” ACT. ACT Inc., n.d. Web. 10 Feb. 2016.

Scott-Clayton, Judith. “Do High-Stakes Placement Exams Predict College Success?” Community College Research Center. Columbia University, Feb. 2012. Web. 2 Feb. 2016.
