Maybe it is the 30 years of working with challenged learners that made me abhor the over-zealous use of the infamous red pen. I'm not the only one who would argue that the red pen is punishing and demotivating: it highlights a student's failings rather than encouraging good work. New studies are finding that students take real offense at this common practice, which still runs rampant in most school settings. One observation was that the red pen, and the penalizing connotations that go along with it, can be a real turn-off for students and can discourage them from revising their writing. Apparently computers don't judge… and can actually encourage students to revise written work. Recent research shows that impersonal feedback from instructors can be counterproductive, even punitive, yet surprisingly, students do not seem to feel criticized when similar feedback comes from a computer. There is much debate about computerized scoring, and as usual Annie Murphy Paul is right on top of the controversy. Teachers, you may just be convinced to throw out all of your red pens after reading this article. Please share!


Why Students Prefer to Learn From a Machine

By Annie Murphy Paul

Robo-readers aren’t as good as human readers—they’re better.

In April of 2012, Mark D. Shermis, then the dean of the College of Education at the University of Akron, made a striking claim: "automated essay scoring engines" were capable of evaluating student writing just as well as human readers. Shermis's research, presented at a meeting of the National Council on Measurement in Education, created a sensation in the world of education—among those who see such "robo-graders" as the future of assessment, and among those who believe robo-graders are worse than useless.

The most outspoken member of the second camp is undoubtedly Les Perelman, a former director of writing and a current research affiliate at the Massachusetts Institute of Technology. "Robo-graders do not score by understanding meaning but almost solely by use of gross measures, especially length and the presence of pretentious language," Perelman charged in an op-ed published in the Boston Globe earlier this year. Test-takers who game the programs' algorithms by filling pages with lots of text and using big words, Perelman contended, can inflate their scores without actually producing good writing.

Perelman makes a strong case against using robo-graders for assigning grades and test scores. But there's another use for robo-graders—a role for them to play in which, evidence suggests, they may be not only as good as humans, but better. In this role, the computer functions not as a grader but as a proofreader and basic writing tutor, providing feedback on drafts, which students then use to revise their papers before handing them in to a human.

Instructors at the New Jersey Institute of Technology have been using a program called E-Rater in this fashion since 2009, and they've observed a striking change in student behavior as a result. Andrew Klobucar, associate professor of humanities at NJIT, notes that students almost universally resist going back over material they've written. But, Klobucar told Inside Higher Ed reporter Scott Jaschik, his students are willing to revise their essays, even multiple times, when their work is being reviewed by a computer and not by a human teacher. They end up writing nearly three times as many words in the course of revising as students who are not offered the services of E-Rater, and the quality of their writing improves as a result. Crucially, says Klobucar, students who feel that handing in successive drafts to an instructor wielding a red pen is "corrective, even punitive" do not seem to feel rebuked by similar feedback from a computer.

A close look at one of the growing number of independent studies of automated writing feedback provides some clues as to what might be going on among NJIT students. Khaled El Ebyary of Alexandria University in Egypt and Scott Windeatt of Newcastle University in Britain published the study in the International Journal of English Studies; it looks at the effects of a robo-reader program called Criterion on the writing of education students learning to teach English as a foreign language. The students in the study received Criterion's feedback on two drafts of essays submitted on each of four topics. The computer program appeared to transform the students' approach to the process of receiving and acting on feedback, El Ebyary and Windeatt report. Comments and criticism from a human instructor actually had a negative effect on students' attitudes about revision and on their willingness to write, the researchers note.
By contrast, interactions with the computer produced overwhelmingly positive feelings, as well as an actual change in behavior—from “virtually never” revising, to revising and resubmitting at a rate of 100 percent. As a result of engaging in this process, the students’ writing improved; they repeated words less often, used shorter, simpler sentences, and corrected their grammar and spelling. These changes weren’t simply mechanical. Follow-up interviews with the study’s participants suggested that the computer feedback actually stimulated reflectiveness in the students—which, notably, feedback from instructors had not done. READ the full article here.

This story was produced by The Hechinger Report, a nonprofit, nonpartisan education-news outlet affiliated with Teachers College, Columbia University. Annie Murphy Paul is a fellow at the New America Foundation and the author of the forthcoming book Brilliant: The Science of How We Get Smarter.

