EXPLORING AI-DRIVEN PROGRAMMING EXERCISE GENERATION

Reference Text
Proceedings of the 20th International CDIO Conference, ESPRIT, Tunis, Tunisia, June 10-13, 2024
Year
2024
Abstract

Large language models (LLMs) are transforming how teachers work. In this paper, we examine several experimental approaches to generating software programming exercises with ChatGPT, a popular and publicly accessible LLM. The generated exercises were tightly connected to a large Python programming course targeted at students of Information Technology, Software Engineering, and Computing.

We experimented with three separate approaches. In the first, we generated new programming exercises on a specific topic using theme injection. In the second, we generated variations of existing programming exercises by changing their theme or content. In the third, we generated hybrid exercises by injecting original programming exercises with additional topics or other related exercises.
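As a minimal illustrative sketch, not part of the study itself, the theme-injection approach could be prompted along the following lines. It assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the prompt wording, model name, and the generate_exercise helper are hypothetical examples rather than the prompts used in the paper.

# Illustrative sketch only: theme injection for exercise generation.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The prompt text, model name,
# and the example topic/theme below are hypothetical.
from openai import OpenAI

client = OpenAI()

def generate_exercise(topic: str, theme: str) -> str:
    """Request a programming exercise on `topic`, framed with `theme`."""
    prompt = (
        f"Write a Python programming exercise that practices {topic}. "
        f"Frame the exercise description around the theme of {theme}. "
        "Include a task description, example input and output, and a model solution."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: an exercise on list comprehensions, themed around a space mission.
print(generate_exercise("list comprehensions", "a space mission"))

The variation and hybrid approaches would follow the same pattern, with the prompt additionally including the text of an existing exercise to be rephrased or combined with another topic.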

Based on our results, all three approaches showed potential but also revealed limitations. Exercise generation with theme injection can produce fully functional exercises; however, these exercises can appear too generic to students or contain errors. The exercise variations retain the semantic meaning of the original exercise quite well while placing it in a different context. We also tested the variations in a large introductory programming course and found that students could not distinguish them from human-written exercises in style or quality. The hybrid exercises were built on the idea of exploring how close we are to fully adaptive learning environments in programming education. The current results of this approach show that further experimentation is needed to reach that goal.

All in all, it was evident that LLMs can be a useful tool for assisting teachers in generating exercises. Even with certain inherent limitations, they are useful in particular cases. We conclude our article by discussing the future possibilities of LLMs, including but not limited to dynamic, automatically generated exercises and fully adaptive learning environments.