Building an Automated Scoring System for a Single English Sentences


The KIPS Transactions:PartB , Vol. 14, No. 3, pp. 223-230, Jun. 2007
DOI: 10.3745/KIPSTB.2007.14.3.223

Abstract

The purpose of developing an automated scoring system for English composition is to score tests of written English sentences and to give feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence, not an essay. Taking a single sentence as input has advantages both in comparing the input with the answers provided by human teachers and in giving detailed feedback to test takers. The system has been developed and tested with real test data collected from English tests given to third-grade junior high school students. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling errors, syntactic errors, and so on. The second is comparing the input sentence with the given answer to identify the differences as errors. The results produced by the system were then compared with those provided by human raters.
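The two-step process described above can be illustrated with a minimal sketch. The function below is a hypothetical toy, not the authors' system: it stands in for step two only, scoring a student sentence against a set of reference answers with token-level sequence matching and returning the word-level differences as feedback (the paper's first step, linguistic error detection, is assumed to happen beforehand).

```python
import difflib


def score_sentence(student, answers):
    """Toy scorer (illustrative only): pick the closest reference answer,
    then report word-level differences as feedback."""
    student_tokens = student.lower().split()

    # Find the reference answer most similar to the student's sentence.
    best_ratio, best_answer = 0.0, None
    for answer in answers:
        ratio = difflib.SequenceMatcher(
            None, student_tokens, answer.lower().split()).ratio()
        if ratio > best_ratio:
            best_ratio, best_answer = ratio, answer

    # Collect the non-matching spans as error feedback.
    diffs = [op for op in difflib.SequenceMatcher(
                 None, student_tokens,
                 best_answer.lower().split()).get_opcodes()
             if op[0] != "equal"]
    return best_ratio, best_answer, diffs
```

For example, scoring "She go to school" against the answers ["She goes to school", "He runs"] selects the first answer and flags one `replace` operation ("go" vs. "goes"), which could be surfaced to the test taker as targeted feedback.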



Cite this article
[IEEE Style]
J. E. Kim, K. J. Lee, K. A. Jin, "Building an Automated Scoring System for a Single English Sentences," The KIPS Transactions: Part B, vol. 14, no. 3, pp. 223-230, 2007. DOI: 10.3745/KIPSTB.2007.14.3.223.

[ACM Style]
Jee Eun Kim, Kong Joo Lee, and Kyung Ae Jin. 2007. Building an Automated Scoring System for a Single English Sentences. The KIPS Transactions: Part B, 14, 3 (2007), 223-230. DOI: 10.3745/KIPSTB.2007.14.3.223.