Keywords: methodology, basic writing, practitioner results, data, consensus
The premise of this article is that Basic Writing practitioners need to be more methodologically cohesive and present more of a united front as a subdiscipline. The one issue that BW scholars have been able to agree upon in recent decades is that formal grammar instruction does not work as a method of teaching Basic Writing. In the history of the discipline, much scholarship was written about this question using a variety of methods. “But it is hard to come up with other examples of professional consensus on matters in Basic Writing, since the researchers in the field do not seem to listen much to each other or build on each others’ findings” (23). In fact, the situation is worse than a lack of consensus, which might at least suggest open debate. The debate occurs, but who is listening? DeGenaro and White argue: “Instead of moving toward a consensus, our researchers too often talk past each other, positions are reiterated rather than reconsidered, and we move in circles” (23).
DeGenaro and White use the more recent mainstreaming debate as the subject through which to study BW methodology. What distinguishes BW from other subdisciplines is that we must fight our battles publicly, arguing for or against measures that other entities propose (26). Basic Writing often needs to push back against outside forces to justify its continuance as a discipline within a school, so some scholarship is focused on an immediate exigency and addressed to a non-BW audience. Other times, the writing is directed at fellow BW scholars, but again, who is listening?
The authors use the Crowley New Abolitionists debate to show that although scholars call for data, the data, once presented, is largely ignored. “To ignore data that contradicts one’s doctrines seems short-sighted. But we can only say this if we trust and value evidence provided by an experimental methodology” (26). This happens when one values philosophical inquiry over data (26-27). White points to his own experimental research (survey results), which was offered during this debate only to be overtly ignored by Crowley.
Another method that is used more frequently in BW is practitioner results. Scholars include their own direct experiences and classroom anecdotes as evidence (28). The authors recognize that this methodology is rooted in passion for our students; however, it is not unproblematic. “We work in a knowledge-building community that consciously seeks to acknowledge the classroom as a meaningful and scholarly domain, but we risk sacrificing rigor and validity when we fail to interrogate what we mean by ‘evidence’” (28).
Two other types of research personae are the philosopher and the historian. Howard Tinberg is presented as an example of a philosopher for writing a think-piece that treats student voices as equal with scholars (31). Ira Shor is used as an example of a historian for providing an overview of the issue. Because he ends his piece with a specific call to change in particular ways (rather than a general call to action), he is also labeled a progressive reformer (31-32).
This piece summarizes some of what I have noticed about methodology in the Journal of Basic Writing: there is a dearth of quantitative research, for sure. It is disappointing that one of the authors of this piece provided quantitative experimental research only to have it ignored in the mainstreaming debate. I have noticed the same “talking past” in issues related to technology use in the Basic Writing classroom. We are so far from consensus, and we approach the issues from such disparate classroom locales, that each writer must reiterate his or her position before getting to the issue at hand. By that point, one wonders whether readers write off the position as too dissimilar from their own to pay close attention.
The issue of practitioner research reminds me of my experiences working as an instructional aide in a special education program, during a time when ADHD was being over-diagnosed. Parents would often exclaim to the special ed teachers, who were themselves teaching many students with ADHD, “You don’t know my student with ADHD!” The authors make it clear that practitioners may mean well when they privilege their own classrooms, but the end result is that we value our students and our situations over the situations shared in the research. Anecdotes in research are perceived as significant only when they involve our students or students who seem similar to ours; when the anecdote is foreign to our experience, what then?
Note: in working on this paper, I am developing more questions than answers.
MOVING FORWARD: QUESTIONS FOR MY PAPER
How to pursue meaningful data when the teacher/researchers are sometimes the most over-worked and disenfranchised?
How to do longitudinal studies at the community college level?
What data is effective data?
How to break past the narrative tradition?
(This wasn’t in this reading, but…) How do BW practitioners define a case study? Is that just a way of putting a positive spin on anecdotal experience?
Haswell, Richard H. “Quantitative Methods in Composition Studies: An Introduction to Their Functionality.” Writing Studies Research in Practice: Methods and Methodologies. Ed. Lee Nickoson and Mary P. Sheridan. Carbondale: Southern Illinois UP, 2012. 185-196. Print.
Keywords: methodology, quantitative research, data, research methods
I looked. I tried. I wanted to find something about quantitative data in Basic Writing research. While data exists for programmatic changes like the Stretch Program or Accelerated Learning Program, my searches for anything else recent and quantitative were a bust. In the book chapter I have selected, Richard Haswell explains that on the WPA listserv, there are frequent calls for data but a “paucity of replies” to such pleas: “It seems the need for quantitative research in the composition field is a crisis in itself” (186).
“My argument is simply that quantitative data gathering and data analysis are an everyday way that humans perceive and act; that in research procedures, they involve straightforward and useful functions; and that in outcome, they have benefits that uniquely serve practitioners and researchers” (186-187). It makes me a little bit sad that Haswell has to justify the need for quantitative methods, but he does so because he says some in composition studies are skeptical of data because numbers are the tools of the statistician, not the rhetorician. Not so, says Haswell.
Quantitative data has four functions, as Haswell defines them: insight, transgression, challengeability, and persuasion. Through insight, data analysis can “see” patterns in large amounts of information that cannot easily be gleaned otherwise. Transgression changes “the way teachers and administrators conceive of their field, sometimes debunking myths that have prevailed for decades” (188). Challengeability means presenting enough of the methodology that the findings can be confirmed or denied; quantifiable data can be challenged or replicated, unlike other types of evidence. Persuasion is the intentional use of data to convince an audience of a need for change; how data (and even which data) is shared influences the perception of that data.
Inherent in this chapter is the call for more scholars to include quantitative data in their research. Haswell includes a list of practical advice for the willing.
–Take courses in statistics and research methods.
–Read quantitative studies with an eye on method.
–Hook up with a savvy researcher.
–Start your own investigation with what you want to know.
I like numbers. I like facts. I know numbers can be manipulated, but I concur with Haswell that the presence of numbers, tables, and methodologies means the reader has the chance to observe the process of analysis and draw additional conclusions both about the accuracy of the information and its application. The lack of quantitative research in composition studies, and particularly Basic Writing, is frustrating. At some point, I want to move beyond assumption and assertion and get down to what is (praxis). One major concern that I have about including quantitative research in my own work in the future is that so little has been published. Does that mean that quantitative research is less likely to be accepted by journals or that too little of it is submitted for publication? As a field, we seem to recognize the value of it when appealing to administration but not as much when writing to one another. Is that because the numbers and charts are outside of our usual comfort zones?
I’d like to take up the call to do this sort of research, but I need to do Haswell’s first step before I can begin: take courses in statistics and research methods.