Assessment item generation, the way forward


  • Muriel Foulonneau
  • Eric Ras


Automatic item generation (AIG) primarily supports computerised adaptive testing, although it is now used in other contexts, for instance to extract items from learning texts or domain models. Current template-based AIG approaches initially focused on a small number of domains in which quantitative variables could be resolved with little risk of error. Generalising AIG as a mainstream technology for diagnostic, summative and even formative assessment requires processing a richer set of variables. Such variables define the variable parts of test items, for instance in the stem, the options or the auxiliary information. This paper discusses current approaches to this problem, which include adding multimedia variables to items, adding variables for feedback elements, and defining new scalable models for measuring the quality of the generated items. Open Educational Resources and semantic models published on the web represent potential sources for generating those variables. The topics discussed in this paper (including concrete methods, techniques and tools) can support the transformation of the AIG field through the generation of a wider range of items and more standardised processes.
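To make the template-based approach mentioned above concrete, the following is a minimal sketch of how a template with variable slots in the stem can be instantiated into multiple-choice items. All function and field names here are illustrative assumptions, not the authors' implementation; the distractor rules are invented examples of the "limited risk of error" quantitative case.

```python
import random

def generate_item(template, variables, rng):
    """Instantiate one multiple-choice item from a template.

    A template holds a stem with named slots, a rule producing the
    correct answer (the key), and rules producing plausible distractors.
    """
    # Resolve each variable slot by picking one of its allowed values.
    values = {name: rng.choice(opts) for name, opts in variables.items()}
    stem = template["stem"].format(**values)
    key = template["answer"](values)
    # Drop any distractor that collides with the key.
    distractors = [d for d in template["distractors"](values) if d != key]
    options = [key] + distractors
    rng.shuffle(options)
    return {"stem": stem, "options": options, "key": key}

# Hypothetical quantitative template: a percentage calculation whose
# distractors model common student errors.
template = {
    "stem": "What is {p}% of {n}?",
    "answer": lambda v: v["p"] * v["n"] / 100,
    "distractors": lambda v: [
        v["p"] + v["n"],       # error: adding instead of multiplying
        v["p"] * v["n"] / 10,  # error: misplaced decimal point
        v["n"] / v["p"],       # error: inverted operation
    ],
}
variables = {"p": [10, 20, 25, 50], "n": [40, 80, 120, 200]}

rng = random.Random(0)
items = [generate_item(template, variables, rng) for _ in range(3)]
for item in items:
    print(item["stem"], "->", item["key"])
```

Each run of the generator yields a different concrete item from the same template, which is what makes the approach attractive for adaptive testing; extending the slot values to multimedia or feedback variables is where the richer variable sets discussed in the paper come in.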