Pilot project: AI supports lecturers in marking handwritten performance assessments
Thanks to the STIFT+ pilot project, lecturers can have handwritten performance assessments marked with AI support, making this examination format attractive even for large courses. Here are the most important questions and answers about the project.
Why the pilot project?
The number of students in the first year of studies has risen sharply. Marking hundreds of handwritten performance assessments manually is very time-consuming. That is why many lecturers now offer their performance assessments in multiple-choice format, as these can be marked by machine. With the help of AI-supported marking, lecturers can also offer handwritten performance assessments in large courses. These give students more freedom for derivations, proofs, sketches and comprehensible lines of argumentation. They can demonstrate their knowledge more comprehensively because, in addition to the result, the solution process is also marked.
Can AI really correct performance assessments?
AI-assisted marking of handwritten performance assessments is suitable for STEM subjects (mathematics, computer science, natural sciences and engineering) in the first year of studies. These are exact sciences with clear, easily comprehensible rules. Current AI models can solve these tasks and can therefore also score the corresponding performance assessment answers.
How does AI-assisted marking work?
After the students have written the performance assessments on paper, these are scanned and digitised on site or later at the respective institute; the original paper copies are archived. The AI system then suggests a score for each sub-task, based on an assessment rubric in which the lecturers have previously specified how many points to award for which answer elements. The examiners carry out random checks in the online system to monitor the quality of the assessment. The performance assessments are then made available to students for review. Students can veto an assessment within a certain time frame (two weeks is recommended). If they do so, the points assigned by the AI for the contested sub-task are rejected and the performance assessment is returned to the examiners, who then mark the disputed sub-tasks by hand.
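The workflow described above, a rubric-based point suggestion followed by a student veto that routes individual sub-tasks back to manual marking, could be sketched roughly as follows. All names, data structures and scoring logic here are hypothetical assumptions for illustration, not the actual STIFT+ software:

```python
# Illustrative sketch only: rubric-based scoring with a veto step.
# Every identifier below is an assumption, not part of the real system.

def ai_suggest_points(rubric, detected_elements):
    """Suggest a score per sub-task: award the rubric points for each
    answer element the AI detected in the scanned solution."""
    return {
        subtask: sum(points for element, points in elements.items()
                     if element in detected_elements.get(subtask, set()))
        for subtask, elements in rubric.items()
    }

def apply_vetoes(suggested, vetoed_subtasks):
    """Vetoed sub-tasks lose their AI score and are queued for the examiners."""
    accepted = {s: p for s, p in suggested.items() if s not in vetoed_subtasks}
    manual_queue = sorted(vetoed_subtasks)
    return accepted, manual_queue

# Hypothetical rubric: sub-task -> {answer element: points awarded}
rubric = {
    "1a": {"derivation": 2, "result": 1},
    "1b": {"sketch": 1, "argument": 2},
}
# Elements the AI believes it found in one student's scanned answers
detected = {"1a": {"derivation", "result"}, "1b": {"sketch"}}

suggested = ai_suggest_points(rubric, detected)
accepted, manual = apply_vetoes(suggested, vetoed_subtasks={"1b"})
```

In this sketch, a veto does not overwrite the score with a manual one directly; it simply removes the AI's suggestion and hands the sub-task back to a human, mirroring the process described above.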
How is it ensured that the AI is not wrong in its assessment?
The awarding of points is transparent, and students have the opportunity to question an assessment at any time and discuss it with student teaching assistants or lecturers. The software used to manage AI-supported marking of handwritten performance assessments is open source and available for viewing by anyone interested.
How was the system tested?
The developed system was tested in five performance assessments in mathematics, thermodynamics and chemistry with around 1,000 students. The performance assessments were marked both by the lecturers and their assistants and by the AI, and the assessments were largely consistent. Based on these results, the Rectorate decided to proceed with the pilot project.
What happens to the exam data?
All data is processed on servers in Europe and stored on ETH's own servers. The scanned performance assessments do not contain student names, but numbers that can only be assigned to students internally at ETH. It has been contractually agreed that the processed data will not be used to train the AI models.
What is the legal basis for AI-supported marking?
Marking continues to be carried out by lecturers in accordance with the legal basis. The AI merely assigns points to the answers and adds them up. The project workflow has been reviewed and meets ETH Zurich's data protection and security requirements.
What are the advantages of AI-assisted marking for students?
With the help of the system, hundreds of exams can be marked in just a few hours. If lecturers release the exams quickly, students can view their preliminary results the day after the exam. Whereas reviewing an examination after the fact has often been a laborious process in the past, AI-assisted marking makes it routine.
AI-assisted exam assessment with the STIFT+ project
Starting in the spring semester of 2026, the Ethel team will support lecturers in marking open, handwritten performance assessments with the help of AI. Together, they will develop task layouts, rubrics, assessment schemes and the parts of the performance assessment that are subject to veto. The technical effort pays off for courses with around 100 or more performance assessments.
Further information is available on the project website.
Questions can be answered by
The project is financed by the Rector's Impulse Fund from a donation by Adrian Weiss.
Note on the translation
This text has been translated for your convenience using a machine translation tool. Although reasonable efforts have been made to provide an accurate translation, it may not be perfect. If in doubt, please refer to the German version.
If you notice any significant translation errors, please send a short message to so that we can correct them. Thank you very much.