Abstract:
Database systems is a core course in the study of Computer Science and Engineering, and at the undergraduate level one of its major topics is SQL. SQL-LES is a Problem-Based e-Learning (PBeL) system that has been used for the learning and teaching of SQL at the undergraduate level. In SQL-LES, students submit SQL answers in online examinations and assignments. Systems of this type with an auto-evaluation feature typically perform evaluation by comparing the result set returned by the submitted SQL expression with that of the correct expression stored in the system for the respective problem. This evaluation approach based on result-set comparison awards full marks when the results match and zero otherwise, which frustrates students whose answers are almost correct but are graded zero.
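The all-or-nothing grading described above can be illustrated with a minimal sketch. The function and table names here are hypothetical, and SQLite stands in for whatever database engine such a system actually uses; the only assumption from the text is that grading compares result sets and awards full marks or zero.

```python
# Minimal sketch of all-or-nothing grading by result-set comparison.
# The helper names and the use of SQLite are illustrative assumptions.

import sqlite3

def run_query(conn, sql):
    """Execute a query and return its result set as a sorted list of rows
    (sorting makes the comparison order-insensitive)."""
    return sorted(conn.execute(sql).fetchall())

def grade_all_or_nothing(conn, answer_sql, correct_sql, full_marks=10):
    """Full marks if the two result sets match exactly, zero otherwise."""
    try:
        if run_query(conn, answer_sql) == run_query(conn, correct_sql):
            return full_marks
    except sqlite3.Error:
        pass  # a malformed answer simply scores zero
    return 0
```

Note that an answer differing from the correct expression by a single predicate still scores zero here, which is exactly the frustration the report sets out to address.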
In this report, we introduce a model for evaluating partially correct SQL answers. The key idea is to calculate a score based on the syntactic similarity between the submitted SQL expression and the correct SQL expression for the respective problem. In many cases there can be more than one correct SQL expression; this issue is addressed by calculating a score with respect to every correct expression and assigning the maximum score to the answer. Since evaluation based on result-set comparison guarantees full scores for completely correct SQL answers, it can be used to filter those answers out first; the partial-evaluation model is then applied only to the answers that are not completely correct.
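The max-over-correct-expressions idea can be sketched as follows. The abstract does not specify the report's similarity measure, so token-level comparison with Python's `difflib.SequenceMatcher` is used here purely as a stand-in; the tokenizer and function names are likewise illustrative assumptions.

```python
# Hedged sketch of partial evaluation: score an answer against every
# correct expression and keep the maximum. SequenceMatcher stands in
# for the report's actual syntactic-similarity measure.

import re
from difflib import SequenceMatcher

def tokenize(sql):
    """Crude SQL tokenizer: lowercase words, keep punctuation as tokens."""
    return re.findall(r"\w+|[^\s\w]", sql.lower())

def similarity(answer_sql, correct_sql):
    """Token-level syntactic similarity in [0, 1]."""
    return SequenceMatcher(None, tokenize(answer_sql),
                           tokenize(correct_sql)).ratio()

def partial_score(answer_sql, correct_sqls, full_marks=10):
    """Score against each correct expression; assign the maximum."""
    best = max(similarity(answer_sql, c) for c in correct_sqls)
    return round(full_marks * best, 2)
```

Under this sketch an answer identical to one correct expression scores full marks, while an answer with a small syntactic deviation (e.g. a wrong literal in a WHERE clause) receives a high but partial score instead of zero.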
We evaluated our model on several real-life data sets obtained from a database practical course. For these data sets, we obtained human scores assigned by database teachers and compared them with the scores generated by the model. The comparison results are found to be quite satisfactory.