Blog #5: Dissertation: Predictive Validity of MC Test for CC Placement

Verbout, Mary F. Predictive Validity of a Multiple-choice Test for Placement in a Community College. Diss. Indiana University of Pennsylvania, 2013. Ann Arbor, 2013. 3592228. Web. 18 Mar. 2016.

Verbout analyzed Compass scores to determine whether the cut-off scores used for placement correlated with success in the courses; she found that they did not. The program in the study was ideal for this question because students were required to take the test but were not required to use the results when selecting their courses.


According to the test creators, “The test is efficient; an algorithm recalculates the students’ overall score with each answer, and as soon as the internal calculation achieves certainty, the test is over, and the score is calculated (ACT Inc., 2006)” (3). It should be noted that “The eight domains addressed by the writing diagnostics are: punctuation, spelling, capitalization, usage, verb formation/agreement, relationships of clauses, shifts in construction, organization” (75-76), so the test measures surface-level correctness rather than the fuller range of skills the courses emphasize, and does not align with the overall goals of the courses.
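To make the quoted description concrete, here is a minimal sketch of how such an adaptive stopping rule could work. This is a toy model of my own devising, not ACT's proprietary algorithm; the proportion-correct estimate and both thresholds are assumptions for illustration only.

```python
import math

def adaptive_score(answers, min_items=5, se_threshold=0.1):
    """Toy adaptive stopping rule (not ACT's actual algorithm): re-estimate
    the examinee's proportion correct after each answer, and stop once the
    standard error of that estimate falls below a threshold, i.e. once the
    "internal calculation achieves certainty"."""
    correct = 0
    for n, is_correct in enumerate(answers, start=1):
        correct += is_correct
        p = correct / n                    # running score estimate
        se = math.sqrt(p * (1 - p) / n)    # uncertainty of that estimate
        if n >= min_items and se < se_threshold:
            break                          # certain enough: end the test early
    return p, n

# A student answering mostly correctly finishes after 10 items, not all 14.
score, items_used = adaptive_score([1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1])
print(score, items_used)  # 0.9 10
```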


Research questions: Does placement based on Compass scores predict success in FYC1 and FYC2? Is there a significant difference between the mean scores of White, Hispanic, and Native American students? Is there a significant difference between the success rates of White, Hispanic, and Native American students in FYC1 and FYC2? (67)


SPSS and Excel were used to process the data; the statistical tests were ANOVA and chi-square. Of the students who scored in the 13-37 range on Compass, 23% of those who enrolled in BW1 went on to pass FYC, 39% of those who enrolled in BW2 passed, and 76% of those who chose to enroll in FYC1 against the recommendation passed it!
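To make those tests concrete, here is a minimal sketch of the two analyses named above, with scipy standing in for Verbout's SPSS/Excel workflow. The score samples are invented and the pass/fail counts are scaled from the reported percentages; none of these are Verbout's raw data.

```python
from scipy.stats import f_oneway, chi2_contingency

# ANOVA: do mean Compass scores differ across ethnic groups? (invented scores)
white_scores    = [45, 52, 61, 38, 70, 55]
hispanic_scores = [40, 36, 58, 49, 44, 51]
native_scores   = [42, 39, 47, 60, 35, 48]
f_stat, anova_p = f_oneway(white_scores, hispanic_scores, native_scores)

# Chi-square: does passing FYC1 depend on the first course taken?
# Counts scaled from the reported 23% / 39% / 76% pass rates, per 100 students.
observed = [
    [23, 77],   # began in BW1
    [39, 61],   # began in BW2
    [76, 24],   # began in FYC1 against the recommendation
]
chi2, chi_p, dof, expected = chi2_contingency(observed)

print(f"ANOVA: F = {f_stat:.2f}, p = {anova_p:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, p = {chi_p:.4f}, dof = {dof}")
```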


Students whose scores placed them directly into BW2 ended up with pass rates below those of students who chose to skip BW2 and enroll directly in FYC1 (46% versus 81%) (82).

Regarding race, the test scores placed Hispanic and Native American students into BW at a higher rate than White students, yet the pass rates for students in FYC were not significantly different. Of particular interest, Hispanic and Native American students were more likely than White students to enroll in the course recommended by their test scores, while White students were more likely to take FYC against the recommendation (84). “With one exception (Hispanic students scoring 13-37, first course BW2), students in each score range and ethnic category completed FYC1 and FYC2 at higher rates when they began in a more advanced course” (85).

The researcher’s conclusions were to discontinue use of Compass and to consider creating studio-model BW courses similar to CCBC’s Accelerated Learning Program (ALP).
The results and methodology sections were not as full as the excellent literature review, which combines educational and composition theorists. Verbout does not stipulate the year range for the study or how long students were followed, nor does she indicate whether the students in BW1 and BW2 had additional remedial courses to take that could slow their entrance into FYC. More information would be needed for the study to be replicable, but the statistics were clear, direct, and surprising.

Paper 4: Methodologies and Basic Writing

In researching Basic Writing and issues related to access to technology, I’ve encountered a handful of methodologies. Most follow a similar pattern: they frame the issue with a literature review and then discuss it further through classroom anecdotes. The anecdotal evidence may take the form of a case study in which a few students serve as objects of study, or it may draw from the experience of one course, with student work called upon as evidence. The anecdotes or descriptions are narrative in nature. Other pieces are less pragmatic: thought-pieces that cover a literature review, stipulate the problem, and then call for change or action.

The Journal of Basic Writing is the main publication for Basic Writing scholarship. Its submission guidelines do not mention a preference for methodology: http://wac.colostate.edu/jbw/submit.cfm. Other journals in Composition Studies (CCC and Computers and Composition) likewise do not forbid any methodology, though they at least list empirical research as an option, unlike JBW. The preponderance of narrative articles does not appear to be a goal of the journal but rather something that simply happens; I wonder if the bulk of submissions arrive in this style. The articles I have located about technology in JBW seem to rely heavily on those methods, with little quantitative research used.

When I interviewed Dr. Kevin DePew about methods used in BW research, he indicated he was not surprised by a reliance on narrative. He speculated that this may be a result of how the people hired to teach BW often come from a variety of disciplines that may value narrative. Furthermore, their own teaching experience may be “trial by fire” which then also lends itself to a narrative tradition in the articles (DePew).

In regard to credibility, I do wonder why BW scholarship is not often cited by other subdisciplines, in much the way that education scholarship is rarely cited by composition studies. Some of the blame for this may lie in the fact that the journals live in different databases from those used by comp studies scholars. While JSTOR is the go-to database for my fellow students, I have to trawl through ERIC for JBW articles. An additional concern is that plenty of peer-reviewed online journals are not housed in the academic databases at all. There is a plethora of work being done; one wonders if it is less referenced because one has to know to look for it.

At this point, my only benchmark for how authoritative or accepted the BW methods are comes from how they are cited within BW because these articles do not appear to be cross-referenced in fields outside of BW. Again, I am unsure of whether this is because of the database issue or a lack of respect for methodologies used.

As referred to in some of my earlier papers, DeGenaro and White argue: “Instead of moving toward a consensus, our researchers too often talk past each other, positions are reiterated rather than reconsidered, and we move in circles” (23). It feels like BW scholars would rather redefine an issue or reframe it in the context of their own institution or classroom context and then draw their own conclusion based on teaching experience rather than use the work that has been done in other contexts to extrapolate to their own. I see this happening at conferences too where someone’s idea will be invalidated by a comment: “Well, that won’t work for my students.”

It may feel as though I have drifted from the main questions for this paper, but for me, all of this is part of the question of methodology. If I am not 100% comfortable with the methodological choices of my discipline, must I hold rigidly to them? I distrust the trend of hypothesizing significant change based on the anecdotal evidence of one person’s classroom experience.

In previous think-pieces, scholars have observed that socioeconomic status affects access to, and comfort using, technology. They have also observed that students in Basic Writing are often the most disenfranchised. To my knowledge, no one has studied the extent to which this disadvantage exists or the extent to which it impacts students' writing ability. In my own classroom, I vacillate between not wanting to contribute additional barriers to student learning and not wanting to create a new barrier to students' success in FYC by perpetuating the digital divide in BW as well.

In 2013 and 2014, I surveyed the writing classes at my institution to ascertain students' access to technology outside the classroom and their comfort using it. Classes were sampled for representation across all four of our campuses, all four levels of our writing classes (two BW levels and two FYC levels), and the times of day the classes met. We eventually surveyed our online courses as well. The survey was given in the first four weeks of the semester and again in the last four weeks. The same questions were asked in both the pre- and post-surveys, with one addition in the post-survey: did students' access to technology change during the semester and, if so, did it improve or worsen?

My preliminary findings are many. The short version: students who do not have easy access to the internet off campus often find ways to achieve better access during the semester when a course requires it or they feel a need for it. We still have a small percentage of students with no internet access at all when not at school. The previous assumption was that students without easy internet access would grow too frustrated finding time to complete their assignments on campus and would withdraw from their courses. We still have students who withdraw from all levels of writing classes, but access to technology does not seem to be a major contributing factor to their persistence. The meeting time of the course did not have a significant impact on students' answers; which campus they attended did. The three campuses at roughly the same socioeconomic level ($60,000-$80,000 average annual income) all reported easier access to the internet and computers than our lowest socioeconomic campus, where results were dismal.

[Chart: Data from all four levels of writing classes at SWIC, representative sampling.]

I also have data for students’ comfort levels with a variety of tasks from remembering multiple usernames and passwords to typing their papers to creating and uploading video files. These can all be cross-referenced with the different campuses and different course levels.
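As a concrete illustration of that cross-referencing, here is a minimal pandas sketch. The file name and column names ("campus", "comfort_typing") are hypothetical stand-ins for the actual survey instrument.

```python
import pandas as pd

# One row per student response; hypothetical file and columns.
responses = pd.read_csv("post_survey.csv")

# Row percentages: within each campus, how comfortable are students with typing?
comfort_by_campus = pd.crosstab(
    responses["campus"],
    responses["comfort_typing"],   # e.g., "low" / "medium" / "high"
    normalize="index",
)
print(comfort_by_campus.round(2))
```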

Essentially, I would like to do something with the information I have, perhaps running the surveys again in a year or two to see how the numbers change as students rely more heavily on their mobile devices. I would also love to run the survey at other local schools to see how the numbers compare between a public university with a strong commuter population and a private university with a larger percentage of residential students.

The dilemma I am having is this: will this type of research be accepted and welcomed in the BW or composition journals? Or is it simply that this type of research is rarely done in our fields because we are not generally numbers people?

Works Cited

DeGenaro, William, and Edward M. White. “Going around in Circles: Methodological Issues in Basic Writing Research.” Journal of Basic Writing 19.1 (2000): 22-34. ERIC. Web. 16 Oct. 2015.

DePew, Kevin. Skype interview. 2 Oct. 2015.

Hancock, Nicole. “SWIC Student Access to Technology and Comfort Using It in Class: Survey Results.” Outcomes Assessment Breakfast, Southwestern Illinois College, Belleville, IL. 19 Aug. 2015. Keynote address.

PAB #4: Methodology

Entry 1

DeGenaro, William, and Edward M. White. “Going around in Circles: Methodological Issues in Basic Writing Research.” Journal of Basic Writing 19.1 (2000): 22-34. ERIC. Web. 16 Oct. 2015.

Keywords: methodology, basic writing, practitioner results, data, consensus

The premise of this article is that Basic Writing practitioners need to be more cohesive in their methodologies and present more of a united front as a subdiscipline. The one issue that BW scholars have been able to agree upon in recent decades is that formal grammar instruction does not work as a method of teaching Basic Writing. In the history of the discipline, much scholarship was written about this using a variety of methods. “But it is hard to come up with other examples of professional consensus on matters in Basic Writing, since the researchers in the field do not seem to listen much to each other or build on each other’s findings” (23). In fact, the situation is worse than a lack of consensus, which might suggest open debate. The debate occurs, but who is listening? DeGenaro and White argue: “Instead of moving toward a consensus, our researchers too often talk past each other, positions are reiterated rather than reconsidered, and we move in circles” (23).


DeGenaro and White use the more recent mainstreaming debate as the subject through which to study BW methodology. What sets BW apart from other subdisciplines is that we must face our battles publicly, arguing for or against measures that other entities propose (26). Basic Writing often needs to fight outside forces to justify its continuance as a discipline within a school, so scholarship is sometimes focused on an immediate exigency with a non-BW audience. Other times, the writing is directed at other BW scholars, but again, who is listening?


The authors use the Crowley New Abolitionists debate to show that although scholars will call for data, when data is presented, it is largely ignored. “To ignore data that contradicts one’s doctrines seems short-sighted. But we can only say this if we trust and value evidence provided by an experimental methodology” (26). This happens when one values philosophical inquiry over data (26-27). White refers to his own experimental research (survey results) that was used during this debate only to be openly and overtly ignored by Crowley.


Another method that is used more frequently in BW is practitioner results. Scholars include their own direct experiences and classroom anecdotes as evidence (28). The authors recognize that this methodology is rooted in passion for our students; however, it is not unproblematic. “We work in a knowledge-building community that consciously seeks to acknowledge the classroom as a meaningful and scholarly domain, but we risk sacrificing rigor and validity when we fail to interrogate what we mean by ‘evidence’” (28).


Two other types of research personae are the philosopher and the historian. Howard Tinberg is presented as an example of a philosopher for writing a think-piece that treats student voices as equal to those of scholars (31). Ira Shor is used as an example of a historian for providing an overview of the issue. Because he ends his piece with a specific call to change in particular ways (rather than a general call to action), he is also labeled a progressive reformer (31-32).


REFLECTION

This piece summarizes some of what I have noticed about methodology in the Journal of Basic Writing. There is a dearth of quantitative research, for sure. It is disappointing that one of the authors of this piece provided quantitative experimental research only to have it ignored in the mainstreaming debate. I have noticed the same “talking past” in issues related to technology use in the Basic Writing classroom. We are so far from consensus, and we tackle the issues from such disparate classroom locales, that each writer must reiterate his or her position before getting to the issue. By that point, one wonders if readers write off the position as too dissimilar from their own to pay close attention to the issue.


The issue of practitioner research reminds me of experiences I had when working as an instructional aide in a special education program, during a time when ADHD was being over-diagnosed. Parents would often exclaim to the special ed teachers that, although the teachers taught many students with ADHD, “You don’t know my student with ADHD!” The authors make it clear that practitioners may mean well when they do this, but the end result is that we value our students and our situations over the situations shared in the research. Anecdotes in research are perceived as significant only when they involve our students or students who seem similar to ours; when the anecdote is foreign to our experience, what then?


Note: in working on this paper, I am developing more questions than answers.


MOVING FORWARD: QUESTIONS FOR MY PAPER

How to pursue meaningful data when the teacher/researchers are sometimes the most over-worked and disenfranchised?

How to do longitudinal studies at the community college level?

What data is effective data?

How to break past the narrative tradition?

(This wasn’t in this reading, but…) How do BW practitioners define a case study? Is that just a way of putting a positive spin on anecdotal experience?

[Image: First World Problems Basic Writing meme, self-created on Meme Generator]

Entry 2

Haswell, Richard H. “Quantitative Methods in Composition Studies: An Introduction to Their Functionality.” Writing Studies Research in Practice: Methods and Methodologies. Ed. Lee Nickoson and Mary P. Sheridan. Carbondale: Southern Illinois UP, 2012. 185-196. Print.

keywords: methodology, quantitative research, data, research methods

I looked. I tried. I wanted to find something about quantitative data in Basic Writing research. While data exists for programmatic changes like the Stretch Program or Accelerated Learning Program, my searches for anything else recent and quantitative were a bust. In the book chapter I have selected, Richard Haswell explains that on the WPA listserv, there are frequent calls for data but a “paucity of replies” to such pleas: “It seems the need for quantitative research in the composition field is a crisis in itself” (186).


“My argument is simply that quantitative data gathering and data analysis are an everyday way that humans perceive and act; that in research procedures, they involve straightforward and useful functions; and that in outcome, they have benefits that uniquely serve practitioners and researchers” (186-187). It makes me a little sad that Haswell has to justify the need for quantitative methods, but he does so because, he says, some in composition studies are skeptical of data, viewing numbers as the tools of the statistician, not the rhetorician. Not so, says Haswell.


Quantitative data has four functions, as defined by Haswell: insight, transgression, challengeability, and persuasion. Through insight, data mining can “see” patterns in large amounts of data that cannot easily be gleaned otherwise. Transgression changes “the way teachers and administrators conceive of their field, sometimes debunking myths that have prevailed for decades” (188). Challengeability means presenting enough of the methodology that the findings can be confirmed or denied; quantifiable data can be challenged or replicated, unlike other types of evidence. Persuasion is the intentional use of data to convince the audience of a need for change; how data (and even what data) is shared influences the perception of that data.


Inherent in this chapter is the call for more scholars to include quantitative data in their research. Haswell includes a list of practical advice for the willing.

–Take courses in statistics and research methods.

–Read quantitative studies with an eye on method.

–Hook up with a savvy researcher.

–Start your own investigation with what you want to know.

–Start small.

–Embrace rigor.


REFLECTION

I like numbers. I like facts. I know numbers can be manipulated, but I concur with Haswell that the presence of numbers, tables, and methodologies means the reader has the chance to observe the process of analysis and draw additional conclusions both about the accuracy of the information and its application. The lack of quantitative research in composition studies, and particularly Basic Writing, is frustrating. At some point, I want to move beyond assumption and assertion and get down to what is (praxis). One major concern that I have about including quantitative research in my own work in the future is that so little has been published. Does that mean that quantitative research is less likely to be accepted by journals or that too little of it is submitted for publication? As a field, we seem to recognize the value of it when appealing to administration but not as much when writing to one another. Is that because the numbers and charts are outside of our usual comfort zones?


I’d like to take up the call to do this sort of research, but I need to do Haswell’s first step before I can begin: take courses in statistics and research methods.