Exam-Grading Assistance
Project STIFT+: Making meaningful assessment possible at scale
Last updated February 12, 2026.
When assessing learning in many mathematical, engineering, and scientific disciplines, how students arrive at a solution matters: derivations, proofs, sketches, and reasoning steps. Here, paper-based assessments have the edge, but until now they have lacked the scalability of online assessments.
Starting Spring Semester 2026, the Ethel team supports departments in scoring open-ended, handwritten exams with AI - with human oversight. Funding is available to help departments get this project off to a safe and successful start; see the STIFT+ project description (PDF, 1.1 MB).
Most question types are supported. The key is authoring and layout: what looks like a small formatting choice can determine whether the workflow is fast and reliable or requires extensive manual intervention. See our author guidelines (PDF, 6.2 MB), which also include examples for worksheets and rubrics.
Please coordinate with us early as you design your exam so we can help you avoid unnecessary rework.
Eligible applicants are responsible examiners for official ETH Zurich exams. Initially, priority will be given to large-enrollment first-year courses.
Please note: this is an R&D collaboration, not a full-service “we take over grading” solution. We expect the course team to partner with us on exam design, rubric alignment, and review. Success depends on shared ownership.
This is an official project of the Rectorate of ETH Zurich. The workflow has been reviewed to meet ETH Zurich’s privacy and data security requirements.
For scoring as part of the normal examination process, no separate student opt-in or opt-out is required.
AI-supported workflows at ETH Zurich may assist with assigning points, but grades are always assigned by the responsible examiner.
If you or your team would like to conduct an additional research study in connection with the exam, this is handled separately as human-subject research. We can support you in the application process. Any use of exam data for research purposes (beyond grading) is strictly opt-in and cannot affect grades.
- Co-design the exam (see the author guidelines, PDF, 6.2 MB). Together we align question layout, rubrics, and “problem parts” (the units students can later challenge), so the workflow stays efficient and reliable.
- Prepare and print exam sheets with identifiers. Each sheet is printed with a unique identifier (for matching scans to candidates). A seating plan with assigned exam numbers (numbers posted by room/table) is strongly recommended to reduce mix-ups and logistics on exam day.
- Students write the exam on paper.
- Scan all exams; archive the paper originals. Exams are scanned and the paper is archived. Scanning typically happens on departmental ETH printers/scanners and can take time - plan staffing and buffer time accordingly.
- AI generates point allocations (per problem part). The system proposes points for each part (AI-assisted scoring), creating a first pass that staff can review.
- Course staff review and intervene as needed (quality control). In the online system, the course team can do spot checks, focus on flagged cases, and make corrections before releasing results to students.
- Open a student review window (suggested: ~2 weeks). For a defined period, students can view their scanned exam and the per-part point allocations in the online system.
- Students can submit a veto per problem part. If a student files a veto for a part, the AI-assigned points for that part are discarded and the part is routed for human scoring.
- Manual scoring only where a veto was filed. Course staff score only the vetoed parts by hand in the online system; everything else remains as previously confirmed.
- Close the veto window. After the window closes, exams can optionally remain available view-only for longer.
- Final grades are assigned by the examiner.
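The per-part veto rule in the workflow above can be sketched as a small data model. This is an illustrative sketch only; all names (`PartScore`, `final_points`) are hypothetical and do not reflect the actual system's internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PartScore:
    """Score state for one problem part (hypothetical model for illustration)."""
    part_id: str
    ai_points: float                        # first-pass points proposed by the AI
    vetoed: bool = False                    # student filed a veto for this part
    manual_points: Optional[float] = None   # set by staff only where a veto was filed

def final_points(score: PartScore) -> float:
    """Veto rule: a veto discards the AI points; the part must be scored by hand."""
    if score.vetoed:
        if score.manual_points is None:
            raise ValueError(f"Part {score.part_id} was vetoed but not yet manually scored")
        return score.manual_points
    return score.ai_points

# Example: one accepted part, one vetoed part rescored by staff
parts = [
    PartScore("1a", ai_points=3.0),
    PartScore("1b", ai_points=1.5, vetoed=True, manual_points=2.0),
]
total = sum(final_points(p) for p in parts)  # 3.0 + 2.0 = 5.0
```

The key design point the sketch captures: only vetoed parts require manual scoring, and a vetoed part cannot contribute its AI points under any circumstances.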
Related publications
- Grading assistance for a handwritten thermodynamics exam using artificial intelligence: An exploratory study
- Assessing confidence in AI-assisted grading of physics exams through psychometrics: An exploratory study
- Artificial-Intelligence Grading Assistance for Handwritten Components of a Calculus Exam
- Assisting the grading of a handwritten general chemistry exam with artificial intelligence
Contact Gerd Kortemeyer for more information.