Automatic and Scalable Detection of Logical Errors in Functional Programming Assignments
We present a new technique for automatically detecting logical errors in functional programming assignments. Although logical errors are more difficult to detect than syntax or type errors, detecting them remains largely a manual process that relies on hand-written test cases. However, designing proper test cases is nontrivial and requires considerable human effort. Furthermore, manual test cases are unlikely to catch diverse errors, because instructors cannot anticipate all corner cases across diverse student submissions. We aim to reduce this burden by automatically generating test cases for functional programs. Given a reference program from an instructor and a student's submission, our technique generates, without any manual effort, a counter-example that captures the semantic difference between the two programs. The key novelty behind our approach is a test case synthesis algorithm that tightly couples enumerative search with symbolic verification, which is particularly effective for detecting corner-case errors. The experimental results show that our technique is more effective than existing approaches: it detected 88 errors that were missed by carefully designed test cases, and it outperformed existing property-based testing techniques. We also demonstrate the usefulness of our technique in the context of test-case-based program repair, where it effectively helps to eliminate test-case-overfitted patches.
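To make the core idea concrete, the following is a minimal sketch (not the paper's actual algorithm, which also uses symbolic verification) of the enumerative half of counter-example generation: given an instructor's reference program and a student submission, exhaustively try small inputs until the two programs disagree. The `reference`, `submission`, and `find_counter_example` names, the buggy accumulator, and the input domain are all hypothetical choices made for illustration.

```python
from itertools import product

def reference(xs):
    # Instructor's reference program: maximum element of a non-empty list.
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

def submission(xs):
    # Hypothetical student submission: initializes the accumulator to 0,
    # so it is wrong on all-negative inputs -- a classic corner case that
    # hand-written test cases often miss.
    m = 0
    for x in xs:
        if x > m:
            m = x
    return m

def find_counter_example(ref, sub, max_len=3, domain=range(-2, 3)):
    # Enumerative search: try every list of length 1..max_len over a small
    # value domain and return the first input on which the outputs differ.
    for n in range(1, max_len + 1):
        for candidate in product(domain, repeat=n):
            xs = list(candidate)
            if ref(xs) != sub(xs):
                return xs  # counter-example witnessing the semantic difference
    return None  # no difference found within the search bounds

cex = find_counter_example(reference, submission)
```

Pure enumeration like this is bounded by the search space it can afford to explore; the paper's contribution is to couple such a search with symbolic verification so that corner-case inputs outside small enumeration bounds can still be found.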