Blog #5: Dissertation – Predictive Validity of MC Test for CC Placement

Verbout, Mary F. Predictive Validity of a Multiple-choice Test for Placement in a Community College. Diss. Indiana University of Pennsylvania, 2013. Ann Arbor, 2013. 3592228. Web. 18 Mar. 2016.

Verbout studied Compass scores to determine whether the cut-off scores used for placement correlated with success in the courses; she found that they did not. The program in the study was well suited to this question because, while students were required to take the test, they were not required to use the results when selecting their courses.


According to the test’s creators, “The test is efficient; an algorithm recalculates the students’ overall score with each answer, and as soon as the internal calculation achieves certainty, the test is over, and the score is calculated (ACT Inc., 2006)” (3). It should be noted that “The eight domains addressed by the writing diagnostics are: punctuation, spelling, capitalization, usage, verb formation/agreement, relationships of clauses, shifts in construction, organization” (75-76), so the test does not align with the overall goals of the courses.
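To make the adaptive mechanism concrete, here is a minimal sketch of the general idea of a test that stops as soon as its running score estimate is precise enough. ACT’s actual algorithm is proprietary, so the scoring rule, stopping threshold, and item bank below are invented purely for illustration.

```python
# Minimal sketch of an adaptive test that stops once the score estimate
# stabilizes. The scoring rule, "certainty" threshold, and item bank are
# invented for illustration; this is not ACT's actual algorithm.
import random
import statistics

def adaptive_test(answer_item, item_bank, se_threshold=0.10, min_items=5):
    """Administer items until the running score estimate is precise enough."""
    responses = []                                       # 1 = correct, 0 = incorrect
    for item in item_bank:
        responses.append(answer_item(item))
        if len(responses) >= min_items:
            p = statistics.mean(responses)               # crude running score
            se = (p * (1 - p) / len(responses)) ** 0.5   # its standard error
            if se < se_threshold:                        # "certainty" reached: stop early
                break
    return statistics.mean(responses), len(responses)

# Example: a simulated student who answers about 70% of items correctly.
score, items_used = adaptive_test(lambda item: int(random.random() < 0.7), list(range(40)))
print(f"estimated score {score:.2f} after {items_used} items")
```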


Research questions: (1) does placement based on Compass scores predict success in FYC1 and FYC2; (2) is there a significant difference between the mean Compass scores for White, Hispanic, and Native American students; and (3) is there a significant difference between success rates for White, Hispanic, and Native American students in FYC1 and FYC2 (67)?


Data were processed in SPSS and Excel, and the statistical tests were ANOVA and chi-square. Of the students who scored in the 13-37 range on Compass, 23% of those who enrolled in BW1 went on to pass FYC, 39% of those who enrolled in BW2 passed, and 76% of those who chose to enroll in FYC1 against the recommendation passed it!
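For a sense of what the chi-square comparison looks like in practice, here is a rough sketch testing whether pass rates differ across the three enrollment paths. The counts are placeholders I chose to mirror the reported percentages, not Verbout’s actual data.

```python
# Rough sketch of a chi-square test of pass rates by first-course choice for
# students scoring 13-37 on Compass. Counts are invented placeholders that
# mirror the reported percentages; they are not Verbout's actual data.
from scipy.stats import chi2_contingency

#            passed FYC   did not pass
observed = [[23, 77],     # enrolled in BW1 first
            [39, 61],     # enrolled in BW2 first
            [76, 24]]     # enrolled directly in FYC1

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p_value:.4f}")
```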


Students whose scores placed them directly into BW2 ended up with lower pass rates than students who chose to skip BW2 and enroll directly in FYC1 (46% versus 81%) (82).

The results regarding race: the test scores placed Hispanic and Native American students into BW at a higher rate than White students, while the pass rates for students in FYC were not significantly different. What was of interest is that the Hispanic and Native American students were more likely to enroll in the course recommended for them based on test scores, whereas White students were more likely to take FYC against the test score recommendation (84). “With one exception (Hispanic students scoring 13-37, first course BW2), students in each score range and ethnic category completed FYC1 and FYC2 at higher rates when they began in a more advanced course” (85).

The researcher’s conclusions were to discontinue use of Compass and to consider creating studio-model BW courses similar to CCBC’s Accelerated Learning Program (ALP).
The results and methodology sections were not as full as the excellent literature review, which combines educational and composition theorists. Verbout does not stipulate the year range for the study or how long students were followed, nor does she note whether the students in BW1 and BW2 also had additional remedial courses to take that could slow their entrance into FYC. More information would be needed for the study to be replicable, but the statistics were clear, direct, and surprising.


Blog #4: Dissertation about Predictive Factors Associated with Need for Remediation

Whiton, John C. Predictive Factors Associated with Newly Graduated High-School Students’ Enrollment in a Remedial Course at a Community College. Diss. Liberty University, 2015. Ann Arbor, 2016. 10027104. Web. 14 Mar. 2016.

Whiton administered a questionnaire at a community college in Maryland. His research question was which factors best predicted student need for remediation. His research focused on recent high-school graduates and analyzed factors such as age, gender, race, high-school GPA and social and cultural capital. The questionnaire was an already established survey.

The survey itself was administered as a convenience sample. He needed at least 150 survey responses from students who were enrolled in remedial classes (reading, writing, math) and 150 from students who were enrolled in credit-bearing courses.

Many of the questions were coded as binary variables, and logistic regression was used for the data analysis. The surveys were grouped by students’ response to the question of whether or not they were enrolled in a remedial course.
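A minimal sketch of that setup, assuming a binary outcome (enrolled in a remedial course or not) regressed on binary-coded predictors; the column names and the tiny data set are stand-ins I invented, not Whiton’s survey items.

```python
# Minimal sketch of a logistic regression on binary-coded survey responses.
# The predictors and ten-row data set are invented placeholders, not Whiton's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "remedial":        [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],  # 1 = enrolled in a remedial course
    "math_above_alg2": [0, 1, 0, 1, 1, 0, 1, 0, 0, 1],  # 1 = took math above Algebra 2
    "income_over_50k": [0, 1, 1, 1, 0, 0, 1, 0, 0, 1],  # 1 = family income above $50,000
})

X = sm.add_constant(df[["math_above_alg2", "income_over_50k"]])
fit = sm.Logit(df["remedial"], X).fit(disp=False)

print(fit.summary())
print(np.exp(fit.params))  # odds ratios for each predictor
```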

Students were surveyed in a variety of class types (math, English, performing arts, etc.) as convenience allowed. The researcher needed to garner instructor permission to administer the survey in class. The survey took 10 minutes of class time, and students placed their completed surveys into an envelope that was then placed, along with the envelopes from other classes, into a larger envelope. Students were eligible for a drawing for a $200 gift card if they filled out a separate sheet with their contact information. Surveys were kept anonymous, and data was entered into an Excel spreadsheet and coded in binary terms whenever possible.

In the methodology section, Whiton describes how he eliminated one question category because all but 3% of respondents answered in the affirmative. For other questions, he regrouped some of the answers. For instance, in one question about how often parents spoke to the student about college, “never” and “sometimes” were combined into a single category (“never/sometimes”) while “often” was used as the reference group. Whiton describes other instances of this regrouping, most notably when parental income was grouped as below $35,000, $35,001-$50,000, and above $50,000.
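The sketch below shows one common way such regrouping is done in practice with pandas; the column name and category labels are my stand-ins, not the survey’s actual wording.

```python
# Sketch of regrouping a categorical survey answer before coding it for regression.
# Column name and labels are stand-ins, not the actual survey wording.
import pandas as pd

df = pd.DataFrame({
    "parents_discussed_college": ["never", "often", "sometimes", "often", "never"],
})

# Collapse "never" and "sometimes" into one category; "often" becomes the
# omitted reference group in the regression.
df["discussed_grouped"] = df["parents_discussed_college"].replace(
    {"never": "never/sometimes", "sometimes": "never/sometimes"}
)
dummy = (df["discussed_grouped"] == "never/sometimes").astype(int)
print(dummy)
```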

His findings were that students were 73% less likely to enter a remedial course if they had taken math above Algebra 2, 68.9% less likely if their annual family income was above $50,000, and 53.5% less likely if they often discussed community, national, and world events with parents/guardians.
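If those “% less likely” figures are reductions in odds (the usual phrasing for logistic-regression results), they convert back to odds ratios as in this quick sketch; treating them as odds reductions is my reading, not something Whiton states explicitly.

```python
# Converting "% less likely" (a reduction in odds) back to an odds ratio:
# OR = 1 - (percent reduction / 100). This assumes the figures are odds
# reductions from the logistic regression, which is the usual reading.
findings = {
    "math above Algebra 2":                73.0,
    "family income above $50,000":         68.9,
    "discusses events with parents often": 53.5,
}

for predictor, pct in findings.items():
    odds_ratio = 1 - pct / 100
    print(f"{predictor}: odds ratio of about {odds_ratio:.3f}")
```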

ANALYSIS:

As a composition researcher, I am troubled by the focus on math. Whiton provides a literature review explaining why he focuses on the highest level of math taken in high school as a predictive factor, but he also quotes the Scott-Clayton piece I referred to in an earlier blog post as showing that math placement exams are traditionally more reliable for placement than English placement tests. If anything, he has only shown that prior math experience predicts who will place into remedial math.

The surveys were given in a variety of courses, so students could have been in a credit-bearing course such as Intro to Fine Arts while also co-enrolled in a remedial course. Additionally, the students who responded “yes” to being enrolled in a remedial course could have been enrolled strictly in math, in an English course, or in a total of three remedial courses. As some of the questions focus on composition skills such as language, it seems like a missed opportunity not to group the students by which remedial courses they were enrolled in.

Summary/Review of CCRC’s Working Paper (Empirical)

The main findings were that the COMPASS test is better at predicting student success in math than in English. In both subjects, COMPASS does relatively well when predicting which students will perform well in a course. It is much less successful when determining which students will perform at a C level (16). For these reasons and others, it is recommended that colleges use multiple measures or initiate alternative developmental paths to success.

Scott-Clayton says, “The predictive power of placement exams is in a sense quite impressive given how short they are (often taking about 20-30 minutes per subject/module). But overall the correlation between scores and later course outcomes is relatively weak” (37).

When data is examined to see what happens if all students are placed in college-level coursework, the English placement tests “increase the success rates in college-level coursework […] by 11 percentage points” (27); however, “[…] these tests generate virtually no reduction in the overall severe error rate (in other words, while the placement tests reduce severe overplacements, they increase underplacements by the same amount)” (27).
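To illustrate how over- and underplacement rates like these get tallied, here is a small sketch comparing a hypothetical cut score against each student’s predicted chance of success in the college-level course. The data, cut score, and “severe” thresholds are invented and only loosely paraphrase Scott-Clayton’s definitions.

```python
# Sketch of tallying severe over- and underplacement against a cut score.
# The data, cut score, and "severe" thresholds are invented; they only loosely
# paraphrase Scott-Clayton's definitions of severe placement errors.
import pandas as pd

students = pd.DataFrame({
    "placement_score":   [15, 28, 41, 55, 62, 74, 80, 33, 47, 90],
    "predicted_success": [0.20, 0.35, 0.55, 0.25, 0.72, 0.80, 0.85, 0.40, 0.75, 0.95],
})

cutoff = 50                                            # hypothetical cut score
college_level = students["placement_score"] >= cutoff

# Severe overplacement: placed in the college-level course, very unlikely to pass.
overplaced = college_level & (students["predicted_success"] < 0.30)
# Severe underplacement: placed in remediation, very likely to have passed anyway.
underplaced = ~college_level & (students["predicted_success"] >= 0.70)

print(f"severe overplacement rate:  {overplaced.mean():.0%}")
print(f"severe underplacement rate: {underplaced.mean():.0%}")
```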

REVIEW:

One immediate concern I had when reading the data analysis is how withdrawals are treated the same as failing grades in Scott-Clayton’s analysis (14). Part of the rationale given for including the withdrawals as failures is, “because withdrawal decisions are not likely to be random, but rather may be closely linked to expectations regarding course performance” (14). This is the only rationale provided, and it is not enough. If Scott-Clayton assumed students only withdraw when they think they will fail anyway, that assumption discounts the students who withdraw each semester for a multitude of personal reasons that have nothing to do with the course and everything to do with what is happening outside of the classroom.

In the predictive validity section, Scott-Clayton quotes Sawyer as saying that conditional probability of success can be tallied with reasonable accuracy when 25% or fewer students are placed in developmental courses (8), but students are placed into developmental courses at much higher rates than 25%. Her solution was to eliminate the very low-scorers in the sensitivity analysis. I have the benefit of hindsight here that the researcher does not, as ACT Inc. admitted in 2015 (three years after this publication), “A thorough analysis of customer feedback, empirical evidence and postsecondary trends led us to conclude that ACT Compass is not contributing as effectively to student placement and success as it had in the past,” before voluntarily phasing out the test. Removing the very low-scorers probably could not account for the large percentage of students who were being placed in developmental courses. Besides, the lowest levels of developmental courses exist to serve low-performing students. What is of greatest concern is how many borderline students have been under-placed into a course that delays their timeline into credit-bearing courses. That did not seem to be the priority of this analysis.

“ACT Compass.” ACT. ACT Inc., n.d. Web. 10 Feb. 2016.

Scott-Clayton, Judith. “Do High-Stakes Placement Exams Predict College Success?” Community College Research Center. Columbia University, Feb. 2012. Web. 2 Feb. 2016.


Summary and Review of the TYCA White Paper on Placement Reform

My project this semester is to test the validity of a Rhetorical Analysis Diagnostic Exam (RADE) that my department has created. This test is meant to supplement or replace the English portion of Accuplacer as a placement exam for the writing courses at my institution. My WPD gave me the task of reviewing TYCA’s White Paper before revising RADE. In the next two weeks, I will be revising the test a colleague created and reformatting it within Blackboard.


The TYCA White Paper is meant to guide decisions made about placement at two-year colleges given the current state of upheaval created by the loss of COMPASS. COMPASS was an inexpensive option for many schools. Its discontinuation gives two-year colleges “an opportunity and a challenge: how to replace an easy-to-use and relatively cheap placement process which has been shown to be severely flawed with a practical and affordable process that is supported by current research” (2).


In the “business as usual” section, the committee explains the flaws inherent in replacing one flawed high-stakes test with another. The assessments are problematic for many reasons, not least of which is weak validity. Additionally, the practice of using a high-stakes exam further divides institutional practices from the professional expertise of the faculty.


TYCA has long recognized that the most effective way to evaluate students’ writing ability is to assess their writing, but there are problems with implementing this type of placement in a two-year college. To be most effective, the writing sample should not be a single piece of writing, and it should be “situated within the context of the institution” (7). It should also be assessed by faculty who teach the courses the students will be placed into. As this process is both time-consuming and costly, most two-year colleges will not be able to implement it.


The committee recommends basing placement on multiple measures rather than one high-stakes test or a stand-alone writing sample. Possible measures include: high school GPA or transcript, Learning and Study Strategies Inventory (LASSI), interview, writing sample, previous college coursework, and/or a portfolio (8-9). The committee supports use of Directed Self-Placement but also recognizes that it might not be feasible for many institutions. Other options are in-class diagnostic writing samples with the opportunity to move into credit-bearing courses or acceleration models that allow students to take a credit-bearing course alongside a Basic Writing course and progress to 101 on the merit of the credit-bearing course’s grade.


If a stand-alone test is going to be used, special attention must be paid to ensuring it is fair and non-discriminatory to students of differing backgrounds, age ranges, etc. Among the recommendations of the committee is that all reforms should “be grounded in disciplinary knowledge” and “be assessed and validated locally” (21).


The TYCA White Paper will serve as an invaluable resource as my colleagues and I continue to argue for use of RADE as an alternative to Accuplacer. Many of the reforms mentioned in the white paper are not possible for my institution, as the administration has already decided to use Accuplacer and will not pay for additional tests to be administered. Additionally, much of our enrollment comes from students who expect to register for classes the day they enroll in the college. For that reason, multiple measures (and DSP, which relies on multiple measures) will not be possible without endangering enrollment procedures, something the college is understandably loath to do. My personal take-away from the white paper is the importance of making sure our diagnostic test does not unfairly privilege any demographic groups over others.

TYCA Research Committee. “TYCA White Paper on Writing Placement Reform.” Teaching English in the Two-Year College. Pending, Sept. 2016.