Lunsford and Straub set out to collect data about professors’ feedback on assignments in college English classrooms, particularly the amount and quality of comments given on drafts that students will later revise. To do this, they recruit twelve “readers” to respond to the same sampling of papers as if the students were their own. The project aims to evaluate both the focus of the commentary (what is the comment about?) and the mode of the comment (how is the comment given?). The focus is split between global and local concerns; the mode spans several different classifications, as outlined below.
Among the trends Lunsford and Straub identify across the case studies are that their readers respond in complete sentences rather than shorthand and that their commentary focuses less on the mechanics of writing than on the actual content, structure, or development of ideas. Most readers also limited their comments to no more than three issues per paper, usually global ones. In the analytic breakdown, the most commonly recurring modes of commentary were explanatory comments (13%) and praise (12%). Readers averaged 22 comments per paper, though those who wrote full-sheet responses gave more comments than those who relied on marginal notes alone. Alongside evaluative comments, the twelve readers also give reader responses that do not serve as “advice,” which Lunsford and Straub suggest allows readers to temper the inherently controlling nature of typical feedback modes and positions the revision process as more of a two-way street. Overall, their findings show that readers understand that the mode of a comment affects the meaning conveyed to the student and that commentary tries to negotiate authority, control, and a degree of student empowerment. The study results indicate that the majority of commentary aims to echo James Moffett’s philosophy that writing is “someone saying something to someone else.”
- MULTIMODAL FEEDBACK. Something I found interesting was that Anson tape-records his commentary. This is something I have heard mentioned a few times recently (voice memos, video responses), and it can function essentially as a mini-conference. It can also reduce the need for some of the tone qualifications that written comments require.
- CONCISE, BUT NOT TOO CONCISE. Given the data on comment length, it seems readers need to hit a sweet spot between too cryptic and too lengthy (the average was 14 words per comment). Too little feedback is frustrating; too much becomes overwhelming. Flagging an issue for a later conference seems to be a helpful move in this situation.
- PEER REVIEW. If any of us were planning on integrating student feedback groups in our classes, perhaps it would be helpful to display a few of these tables or a breakdown of the findings. Integrating these into a “how to give feedback” lesson could be not only a useful exercise for students, in writing and in real life, but could also prompt several journal freewrites (e.g., How did your feedback fall on the scale? How did the feedback you received fall on the scale? How did this feedback process go?) that might help students work through a “cooling-off” period.
- NECESSARY QUALIFICATIONS. With the question of in/directness percolating in my own surrounding environment (I can’t count how many thinkpieces, checklists, and even Gmail extensions I’ve come across that aim to moderate the frequency of hedging words in women’s communication: “just,” “feel,” “I think,” “it could,” etc.), I wondered how this study might take identity into account when classifying “qualifying” statements, breaking down “praise,” and weighing perceptions of authority. Does anyone else struggle with this question when giving feedback?
- ON A SCALE OF ONE TO TEN. For those of us teaching/TAing/grading, where do you think you fall most of the time in terms of the modal classifications Lunsford and Straub offer?
- TIME. How much time do you typically devote to giving feedback on a paper, especially a draft? What amount is fair?