Technology-enhanced assessment: Agency change in the educational eco-system

Guest Editors


Marco Kalz, Open University of the Netherlands, The Netherlands

Eric Ras, Luxembourg Institute of Science and Technology, Luxembourg

Denise Whitelock, The Open University, UK


Important dates


• Paper Submission Deadline: 15 May 2015 -> 25 May 2015
• Notification to the authors: 20 June 2015 -> 15 July 2015
• Camera-ready paper: 20 July 2015 -> 30 July 2015
• Publication of the issue: end of August 2015




Today, learning often occurs collaboratively in learner networks, formal learning is combined with informal learning, and learners use, for example, personalised and personal learning environments adapted to their needs and preferences. While our learning environments have progressed with the help of technology, assessment practices often reproduce traditional power relations and do not give learners more control to enable self-directed learning. Although an ‘agency change’ in the educational eco-system is often proposed in the literature, in practice this change is frequently not happening. Many e-assessment technologies are still rooted in an old testing paradigm in which assessment is triggered by the institution or the teacher; new approaches need to strive for an agency change that makes the learner the trigger of feedback and assessment processes and thereby enables self-regulated learning.

Modern and innovative technologies for technology-enhanced assessment show one or more of the following aspects:

1. Flexible timing: Future assessment and feedback need to be available when needed by the learner and must avoid disturbing the learner in the learning process. Furthermore, timing depends on the learner’s characteristics (e.g., performance level, goal orientation) and the complexity of the task.


2. Automation: To avoid overloading teachers and learners, automation is important. Automation can happen at design time of the assessment, during run-time (i.e., while the test item is being solved, including the feedback mechanism), during scoring, or even after the feedback has been provided. Scoring means evaluating the student’s answer to an assessment item, whereas the last category of automation refers to identifying how the feedback has been utilised.


3. Adaptivity/Adaptability: Assessment and feedback need to be adaptive to the individual learner, their state of knowledge, and other preferences. Adaptability means that personalisation is controlled and steered by the user (i.e., user-driven), whereas adaptivity means that the system controls the personalisation (i.e., system-driven).


4. Data triangulation: Scoring and rich feedback need to combine data from different sources.


5. Continuity and dialogue: Feedback and assessment need to be a continuous process, not restricted to ongoing courses or the schedule of the study year. A continuous dialogue between teachers, learners, peers, and systems is essential.


This special issue will focus on innovative technologies with the potential to contribute to the agency change in the educational eco-system. The issue will bring together contributions in TEL that deal with approaches and innovative assessment technologies that support the transition from current assessment scenarios towards the development of novel forms of e-assessment through which different types of knowledge and skills are evaluated, continuous feedback is provided, and students are more engaged in the learning process. Contributions are expected in the area of TEL from different fields (technology-based assessment, educational measurement, IT&TEL, pedagogy, teacher education, educational psychology, etc.), which provide insights into how future assessment could enhance motivation and learning in TEL environments.

Topics of Interest


Include but are not limited to the following topics: 


• formative assessment in adaptive systems

• formative assessment for users with special needs

• formative assessment for 21st Century skills

• mobile assessment 

• feedback technologies

• integrated e-assessment, embedded assessment

• location-based/context aware educational feedback

• (automated) item design and generation

• automated analysis of open answers

• alignment of formative and summative feedback

• peer-assessment

• learning analytics for assessment purposes

• standard-conform e-assessment, flexible e-assessment, interoperable e-assessment

• e-assessment in complex learning, e.g., collaborative learning, serious games, 3D worlds and digital stories, discussion forums

• learning analytics and assessment

• assessment rubrics

• new interaction modalities for assessment and feedback 


The target audience for this special issue is broad and includes educational researchers, psychometricians, and computer scientists, but also experts from other domains who focus on technology-enhanced assessment.




Keywords


Formative assessment, e-assessment, technology-enhanced assessment, feedback loops, self-directed learning, agency change.


Submission procedure 


All submissions (abstracts and later final manuscripts) must be original and may not be under review by another publication.

The manuscripts should be submitted anonymised in either .doc or .rtf format. 
All papers will be blindly peer-reviewed by at least two reviewers. Prospective authors are invited to submit an 8–14 page paper (including authors' information, abstract, all tables, figures, references, etc.). 
The paper should be written according to the IxD&A authors' guidelines.

More information on the submission procedure and on the characteristics of the paper format can be found on the website of the IxD&A Journal, where the information on copyright policy, responsibility of authors, and publication ethics and malpractice is published.

For scientific advice and for any query, please contact the guest editors:

• marco[dot]kalz [at] ou [dot] nl

• eric[dot]ras[at] list [dot] lu

• denise[dot]whitelock [at] open [dot] ac [dot] uk


marking the subject as: 'IxD&A focus section on: Technology-Enhanced Assessment'.