The Solution Overview
We transformed the Auto-Grader Feedback System from an obstacle into a facilitator of independent student learning.
Old vs. new feedback interface (before/after comparison)
The Problem
When students submit their programming projects, the Sail() auto-grader runs their code against a test suite and returns a score with written feedback on their submission. As shown in the Student User Flow below, students can revise their code based on the feedback and repeat the process an unlimited number of times before the deadline.
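To make this loop concrete, here is a minimal sketch of what a run-and-score pass could look like. Sail()'s actual grader is not public to us, so everything here, from `run_autograder` to the test-case fields, is a hypothetical illustration rather than the platform's real implementation.

```python
import subprocess

def run_autograder(submission_path, test_cases):
    """Run a student's program against each test case and collect a score
    plus written feedback, mirroring the submit-revise loop described above."""
    passed, feedback = 0, []
    for case in test_cases:
        result = subprocess.run(
            ["python", submission_path],
            input=case["stdin"],          # test input fed to the program
            capture_output=True, text=True, timeout=10,
        )
        if result.stdout.strip() == case["expected"]:
            passed += 1
        else:
            feedback.append(
                f"{case['name']}: expected {case['expected']!r}, "
                f"got {result.stdout.strip()!r}"
            )
    # Score is simply the fraction of tests passed, as a percentage.
    return round(100 * passed / len(test_cases)), feedback

# A student can resubmit and re-run this any number of times before the deadline.
score, notes = run_autograder(
    "submission.py",
    [{"name": "adds two ints", "stdin": "2 3\n", "expected": "5"}],
)
```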
We identified several issues with the client's current text-based auto-grader.
Auto-grader feedback is a crucial factor in students’ learning.
Students rely heavily on the auto-grader to understand their mistakes and improve their code. As illustrated in the sketch after this list, feedback is expected to:
• Provide guidance that helps students realize their mistakes
• Suggest next steps for students to correct their mistakes
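As a purely hypothetical illustration of those two expectations, a single feedback entry could pair the raw test result with contextual guidance and a suggested next step. The field names below are ours, not Sail()'s.

```python
# Hypothetical shape of a single feedback entry; all field names are illustrative.
feedback_entry = {
    "test": "test_empty_input",
    "status": "failed",
    # Guidance: help the student realize *why* the test failed.
    "guidance": (
        "Your function raises an IndexError on an empty list, which "
        "suggests the base case is never checked."
    ),
    # Next step: a concrete action to take before resubmitting.
    "next_step": (
        "Add a guard clause that returns 0 for an empty list, then resubmit."
    ),
}
```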
The Goal
The client originally came to us to change how the feedback is visually presented, but we decided to widen the scope:
Improve the Sail() feedback system to support student learning by providing contextual guidance and next steps.
The Process
Background
01 | Who Are Our Clients?
Our clients are two members of the TEELLab who are currently developing the Machine Learning Applications (MLA) course on Sail() that our project centers on.
Eric Keylor
Professor and researcher who creates the curriculum.
Eric cares about how the feedback contributes to the overall learning objectives of the course.
Divya Prem
Engineering Team Lead on the Sail() platform.
Divya is interested in the technical implementation and user experience of the feedback system.
02 | Who Are The Stakeholders?
Authors: Create Sail() course content, quizzes, and project instructions.
Instructors: Teach content and provide support to students.
Students: Go through Sail() course content and complete projects.
Sail() users range from Carnegie Mellon to community colleges to the U.S. military, so its auto-grader feedback must support students with varying levels of technical proficiency and experience with auto-graders.
Research
Research Methods
We started by filling in the gaps in our understanding of feedback on Sail(), from its creation to its usage.
01
We wrote research questions and organized them into those related to students, those related to authors, and those related to both.
02
We then selected the specific research methods that would most effectively allow us to answer our research questions.
Comparative Analysis
To see how other exclusively self-directed learning platforms encourage their students, we pinpointed relevant features in LeetCode, Codecademy, and Khan Academy. Since many Sail() users are CMU students, we also looked at platforms used by CMU CS courses, such as AutoLab, Gradescope, and Web Class, to identify features CMU students might expect.
We found many unique features across these platforms, which fell into three areas: feedback content, feedback formatting, and the social interactions students could have with instructors and each other.
Document Existing Team Knowledge
Team members with Teaching Assistant experience in CMU CS courses documented their assumptions, known facts, and open questions about the problems students face with text-based feedback, based on their real-life experience.
We then identified gaps in our knowledge, which shaped what we focused on during the semi-structured interviews.
Design Walkthrough of Current Interface
We signed into the Sail() platform and went through a course, learning concepts, completing projects, and receiving feedback. We then created a user journey map and identified the actions students take and the goals they pursue to complete a project.
Semi-Structured Interviews + Think Alouds
We conducted 3 one-hour interviews with course authors and 5 thirty-minute interviews with students. For each interview, we generally had three sections:
• Introductory questions to understand their background and experience with Sail().
• A think-aloud of their most recent auto-grader feedback file (for authors) or project (for students).
• Concluding questions reflecting on how the participant would use a "magic wand" to fix any existing issues.
Author Interview Session
Student Interview Session
Disclaimer: The faces of the interview participants and teammates are blurred to protect their privacy.
To derive key findings, we used affinity diagramming. We started by grouping notes from author and student interviews into findings for each stakeholder.
Author Interview Affinity Diagram
Student Interview Affinity Diagram
We combined the findings from author and student interviews to inform our finalized insights.