Editorial Policies

Focus and Scope

This journal aims to provide a forum to present and debate current research and trends in computer-based testing and wider aspects of e-assessment.

Fields of interest include, but are not limited to:

  • Automated assessment
  • Technology-assisted assessment
  • Adaptive assessment
  • Technology-related comparability studies
  • Forms and purposes of e-assessment ('stakes')
  • New assessment techniques
  • Assessment types/item types
  • Assessment quality or service improvement studies
  • Malpractice and security
  • Accessibility

Space will also be devoted, where relevant, to guidance documents on technical standards, glossary entries, policy papers, project reports, conferences/events and reviews of new technology.

Types of article

The journal accepts and publishes five main types of manuscript, which are described below.

a) Original research (empirical) articles [Refereed only]
These are articles that report on original studies in the field of e-assessment. Typically, authors of such articles pose and seek to answer research questions, beginning by locating their work within the context of the literature and existing work in the area. They include an analysis of empirical data in one or more forms (quantitative, qualitative or mixed methods), together with a discussion of the resulting implications and recommendations for practice and future research. The studies reported will vary in scale from very large (eg nationwide or regional programs) to very small (eg individual e-assessment interventions carried out as part of a corporate training program).

b) Development articles [Refereed only]
These articles are primarily concerned with the design and development of e-assessment systems, applications and solutions. They may, for example, detail the technological aspects associated with a novel or innovative e-assessment application, item design, or development tool. Authors must demonstrate the relevance to other e-assessment practitioners and researchers, and justify the rationale or basis for the technology in terms of established theory. They must also specify how and to what extent the system or application has been implemented, outlining the challenges faced and how these were mitigated or overcome. Ideally, a development article submitted to the journal will also include a detailed explanation of the evaluation methodology adopted, along with data/results attesting to the success or otherwise of the project.

c) Theoretical/conceptual articles [Refereed only]
These articles present non-empirical work related to e-assessment in the form of conceptual frameworks or theoretical approaches that have been devised through a comprehensive review and synthesis of the relevant theory and research literature. Their aim may be to identify key issues that need to be resolved, to understand these issues as they relate to both theory and application, to uncover the frontier of research on a problem, to relate a problem to existing theory, or to examine a conceptualised problem through the lens of previous research.

d) Position/commentary articles [Refereed or non-refereed]
Articles in this category discuss a problem, issue or challenge of interest to e-assessment researchers and/or practitioners, and present a suggested solution or "way forward" in terms of how the problem might be further explored or addressed. They may also include critical reflections or commentary on a particular aspect of e-assessment and its future directions. To be accepted for publication, a position/commentary article must thoroughly support and substantiate its stance with a logical argument, and must relate this to what the theory and/or research literature reports on the problem or issue.

e) Case study articles [Refereed or non-refereed]
These articles share experiences related to the design, implementation, evaluation and/or management of e-assessment within the context of a particular organisational or institutional setting. For example, they may document how an e-assessment strategy, policy or initiative was used to address a particular challenge within a specific context. Alternatively, they may describe the application of principled methods, theory or tools to the development of a particular e-assessment resource (eg a test item or assessment task) or assessment design to meet the needs of a subject, industry or group of learners. Authors of case study articles must demonstrate the relevance of the case to those outside the local situation, for example by abstracting reflections from their experience and placing them in a broader and more general context. This is particularly important to ensure that articles in this category are of value to others interested in addressing similar challenges or in undertaking future research.

In addition to the above, the journal also publishes non-refereed technical, guidance and application notes, as well as reviews of policies, books and websites relevant to its focus and scope.


Section Policies


Articles: open submissions; indexed; peer reviewed.

Open Access Policy

This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.


About Us

IJEA is an interdisciplinary journal that aims to publish expert opinion on a range of e-assessment matters, exchanging evidence-based research results and relevant industry trends, as well as presenting practical experience gained from developing and implementing technology-enhanced assessment.


Editorial Board

Professor John Anderson, Managing Inspector, Department of Education, Northern Ireland (DENI)
Professor Cliff Beevers, Chairman of the e-Assessment Association, Heriot-Watt University
Andrew Boyle, Head of Assessment Research, Pedagogy and Assessment, City & Guilds
Philippa Bateman (IJEA Editor-in-Chief), Divisional Administrative Officer, Assessment Research and Development, Cambridge Assessment
Professor Geoffrey Crisp, Dean, Learning and Teaching, Royal Melbourne Institute of Technology (RMIT)
Bobby Elliott, Qualifications Manager, Scottish Qualifications Authority (SQA)
Professor Keri Facer, Professor of Education, Manchester Metropolitan University (MMU)
Dr Robert Harding, Independent Consultant
Martin Ripley, Managing Director, World Class Arena; Director, 21st Century Learning Alliance
Andreas Schleicher, Special Advisor on Education Policy to the Secretary General of the OECD
David Walker, Senior Learning Technologist, University of Dundee
Dr Bill Warburton, CAA Officer, iSolutions, University of Southampton
Professor Denise Whitelock, Institute of Educational Technology, The Open University (OU)