Comparative Study of Different Techniques for Automatic Evaluation of English Text Essays

Authors

  • Amerra J. Ali, Mustansiriyah University, College of Science, Department of Computer Science, Baghdad
  • Abdul Monem S. Rahma, Al-Maarif University College, Department of Computer Science, Baghdad
  • Narjis M. Shati, Mustansiriyah University, College of Science, Department of Computer Science, Baghdad
  • Boshra F. Zopon, Mustansiriyah University, College of Science, Department of Computer Science, Baghdad

Abstract

Automated essay evaluation continues to attract considerable interest because of its educational and commercial importance, as well as the related research challenges in the field of natural language processing. Compared with a human evaluator, who requires more time and whose judgment may depend on his or her mood at a given moment, automated essay evaluation lowers the cost of human resources and delivers results and feedback directly and in a timely manner. This paper focuses on the automated evaluation of English text essays, comparing the various algorithms and techniques that have been applied to datasets of different sizes and essays of different lengths; the performance of these algorithms was assessed using several metrics. The results reveal that the performance of each technique is affected by both the size of the dataset and the length of the essays. Finally, a direction for future research is the construction of a standard dataset containing different types of question-answer pairs, so that the performance of different techniques can be compared fairly.
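
To make the kind of technique compared in this study concrete, consider Latent Semantic Analysis (LSA), one of the classic approaches to automated essay evaluation. The Python sketch below is an illustrative assumption, not the authors' implementation: it scores a hypothetical student essay by its cosine similarity to a reference answer in a low-rank latent space, and the corpus, texts, and number of components are all invented for the example.

    # A minimal LSA scoring sketch (illustrative only, not the paper's method).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical corpus of course material (assumption for illustration).
    corpus = [
        "Photosynthesis converts light energy into chemical energy in plants.",
        "Plants use sunlight, water, and carbon dioxide to produce glucose.",
        "The water cycle describes evaporation, condensation, and precipitation.",
    ]
    reference_answer = "Plants turn sunlight into chemical energy through photosynthesis."
    student_essay = "Using light, water, and CO2, plants make glucose and oxygen."

    # Build a term-document matrix and project it into a low-rank latent space.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(corpus + [reference_answer, student_essay])
    svd = TruncatedSVD(n_components=2, random_state=0)  # tiny corpus, tiny rank
    latent = svd.fit_transform(X)

    # Score the essay by cosine similarity to the reference answer in LSA space.
    ref_vec = latent[-2].reshape(1, -1)
    essay_vec = latent[-1].reshape(1, -1)
    score = cosine_similarity(ref_vec, essay_vec)[0, 0]
    print(f"LSA similarity score: {score:.3f}")

In practice, such a raw similarity score would be mapped to a grade scale calibrated against human-assigned scores, and its agreement with human graders measured with metrics of the kind this study uses to compare techniques.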

Published

2023-01-06

How to Cite

Amerra J. Ali, Abdul Monem S. Rahma, Narjis M. Shati, & Boshra F. Zopon. (2023). Comparative Study of Different Techniques for Automatic Evaluation of English Text Essays. American Scientific Research Journal for Engineering, Technology, and Sciences, 91(1), 1–8. Retrieved from https://www.asrjetsjournal.org/index.php/American_Scientific_Journal/article/view/7521
