Yesterday, I posted about an article published in Inside Higher Ed titled “A Win for the Robo-Readers.” The article covered the findings of a study from the University of Akron, which showed that automated grading software can effectively evaluate writing for grammar and syntax.
I was a bit concerned about some of the comments in the article, but I’m actually very excited about the potential of Robo-Readers for the writing classroom. I wouldn’t necessarily rely on them as a grading tool, though; I want my students to use them as a learning tool. My hope is that Robo-Readers might eventually be able to give students feedback on their writing as they work.
There’s been a lot of buzz lately on my RSS feed about automated essay-grading software, or “Robo-Readers.” Several weeks ago, Inside Higher Ed published an article announcing “A Win for the Robo-Readers.” The article briefly presented and commented on the results of a study conducted at the University of Akron, which found that Robo-Readers can effectively evaluate student writing on a quantitative level. In other words, they can determine whether sentences follow proper grammatical and mechanical rules.
The author of the Akron study and dean of the University of Akron’s College of Education, Mark D. Shermis, admits that composition classes shouldn’t rely completely on the software for grading. It should be used as a supplement to grading, to ease instructors’ workloads.
I agreed with the article and with Shermis’s statements about the Robo-Reader tools until the very end of the piece, where I saw something that I found rather troubling:
The Akron education dean acknowledges that AES software has not yet been able to replicate human intuition when it comes to identifying creativity. But while fostering original, nuanced expression is a good goal for a creative writing instructor, many instructors might settle for an easier way to make sure their students know how to write direct, effective sentences and paragraphs.
“If you go to a business school or an engineering school, they’re not looking for creative writers,” Shermis says. “They’re looking for people who can communicate ideas. And that’s what the technology is best at” evaluating.