Composition evaluation as an unfolding process
Gardiner, Thomas Clifford
Holistic scoring of single-sample, timed, unassisted extemporaneous writing remains the prevailing method for large-scale, high-stakes measurement of writing quality. The validity of such holistic scoring has been challenged on the grounds that raters read hastily and superficially, so that their judgments are prematurely determined by their first impressions of student essays rather than by thorough perusal. Indeed, the process of holistic assessment is of interest as an instance of evaluative reading in which an overall impression unfolds as readers confront each successive element of text. Three interrelated studies investigated the judgmental processes of holistic raters as they evaluated individual essays or portfolios of student writing in real time. The studies addressed how early text elements function as contextual or priming mechanisms that affect overall evaluations of essays or portfolios.

Study 1 examined the judgments of raters evaluating essays in the naturalistic setting of a statewide writing competency test. Ratings of experimentally manipulated essays revealed that impressions of quality engendered by high or low error density in the first half of an essay were rarely modified by subsequent changes in writing quality.

Employing computer-assisted data collection, Study 2 investigated unfolding evaluations as raters read essays into which elements of either sophisticated or infelicitous writing had been intruded at specific junctures. The results generally indicated that ratings were higher when infelicities appeared late rather than early in essays.

Portfolios of student writing have been widely proposed as an alternative to single-sample assessments. The results of Study 3 indicated that raters were in fact able to suspend first impressions and rather accurately average the quality of portfolio components into an overall score.
The experimental portfolios used were, however, more standardized than is typical in portfolio assessments. In short, raters in large-scale essay assessments were susceptible to their first impressions; in a well-controlled portfolio assessment, however, raters were able to withhold judgment of the whole until they had weighed the quality of each constituent essay.