Role:

UI Designer
UX Researcher
Tools:  

Figma
Photoshop
HTML
Context:

January 2024 - May 2024
Pittsburgh, PA
Team:  

Lela Yuan, Erica Fu, Rena Li, Auli Shen, Nellie Tonev

At-a-glance:  

The Sail() platform, developed by the Technology for Effective and Efficient Learning (TEEL) Lab at Carnegie Mellon University, is an online computer science course platform used by more than 10,000 students from 47 universities and colleges. We redesigned its auto-grader feedback page to personalize the learning experience for students of diverse backgrounds and levels of expertise.

  The Solution Overview

We transformed the auto-grader feedback system from an obstacle into a facilitator of students' independent learning.

  The Problem

When students submit their programming projects, the Sail() auto-grader runs tests against their code and provides a score with written feedback on their submission. As shown in the Student User Flow below, students can revise their code based on the feedback and repeat the process an unlimited number of times before the deadline.
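For illustration, here is a hypothetical sketch of how such text-based feedback might render; the test names, scores, and messages are invented and are not actual Sail() output.

```html
<!-- Hypothetical rendering of the current text-based feedback: one
     undifferentiated block of test output. All content is invented. -->
<pre class="autograder-output">
Autograder score: 72 / 100
test_load_dataset ........ PASSED (10/10)
test_normalize_features .. FAILED (0/15)
  AssertionError: expected column means of 0, got 0.43
test_train_model ......... FAILED (2/25)
  ValueError: shapes (100,4) and (5,1) not aligned
</pre>
```

A block like this reports a score but offers little guidance on what to do next, which is the gap the redesign targets.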
We identified several issues with the client's current text-based auto-grader.
Auto-grader feedback is a crucial factor in students' learning.
Students rely heavily on the auto-grader to understand their mistakes and improve their code. Feedback is expected to:

Provide guidance that helps students realize their mistakes

Suggest next steps for students to correct their mistakes

  The Goal

The client originally came to us to change how the feedback is visually presented, but we decided to widen the scope.
Improve the Sail() feedback system to support student learning by providing contextual guidance and next steps.

  The Process

Discover → Define → Develop → Deliver

Discover

  Background

01 |   Who Are Our Clients?
Our clients are two members of the TEEL Lab who are currently developing the Machine Learning Applications (MLA) course on Sail() that our project centers on.
Eric Keylor
Professor and researcher who creates the curriculum.

Eric cares about how the feedback contributes to the overall learning objectives of the course.
Divya Prem
Engineering Team Lead on the Sail() platform.

Divya is interested in the technical implementation and user experience of the feedback system.
02 |   Who Are The Stakeholders?
 Authors  Create Sail() course content, quizzes, and project instructions.
 Instructors  Teach content and provide support to students.
 Students  Go through Sail() course content and complete projects.

Sail() users range from Carnegie Mellon to community colleges to the U.S. military, so its auto-grader feedback must support students with varying levels of technical efficacy and experience with auto-graders.

  Research

Research Methods
We started by filling in the gaps in our understanding of feedback on Sail(), from creation to usage.
01
We drafted research questions, organizing them into questions about students, questions about authors, and questions about both.
02
We then selected the specific research methods that would most effectively answer those questions.
Comparative Analysis
To see how purely self-directed learning platforms encourage their students, we pinpointed relevant features in LeetCode, Codecademy, and Khan Academy. Since many Sail() users are CMU students, we also looked at platforms used by CMU CS courses, such as AutoLab, Gradescope, and Web Class, to identify features CMU students might expect.

We found many distinctive features across these platforms, which fell into three areas: feedback content, formatting, and the social interactions students could have with instructors and each other.
Document Existing Team Knowledge
Team members with Teaching Assistant experience in CMU CS courses documented their assumptions, facts, and questions about the problems students face with text-based feedback, drawing on first-hand experience.

We identified gaps in our knowledge, which contributed to what we focused on during the semi-structured interviews.
Design Walkthrough of Current Interface
We signed into the Sail() platform and went through a course, learning concepts, completing projects, and receiving feedback. We then created a user journey map and identified the actions and goals that students take to complete a project.
Semi-Structured Interviews + Think Alouds
We conducted 3 one-hour interviews with course authors and 5 thirty-minute interviews with students. Each interview generally had three sections:

• Introductory questions to understand their background and experience with Sail().
• A think-aloud of the participant's most recent auto-grader feedback file (for authors) or project (for students).
• Concluding questions asking how, given a magic wand, the participant would fix any existing issues.
Author Interview Session
Student Interview Session
Disclaimer: The faces of the interview participants and teammates are blurred to protect their privacy.
To derive key findings, we used affinity diagramming. We started by grouping notes from the author and student interviews into findings for each stakeholder.
Author Interview Affinity Diagram
Student Interview Affinity Diagram
We combined the findings from author and student interviews to inform our finalized insights.

Define

  Research Insights

01 |   Insights Overview
We synthesized our findings from the affinity diagrams and identified four key insights that capture the most important principles for improving the creation, personalization, and effectiveness of auto-grader feedback.
Insight 1
Creating clear guidelines will help authors more quickly create feedback that uses best practices and is flexible across different courses.
Insight 2
The content of auto-grader feedback should be personalized based on each student's level of expertise.
Insight 3
It is motivating for students to have explicit guidance on what to do next after receiving auto-grader feedback.
Insight 4
Feedback needs to be detailed enough for students to debug their code while not being overwhelming, so as to manage students' mental load.
02 |   How Are Insights Connected?
Creating steps and guidelines for authors will help authors create effective auto-grader feedback for the students: feedback that is personalized, motivating, and detailed but not overwhelming.

Develop

 Low-Fidelity: Testing Ideas

Ideation | Crazy 8s
We let our prototyping questions guide our ideation and completed a "Crazy 8s" exercise that generated more than 80 ideas. This activity pushed the boundaries of our comfort zone and encouraged out-of-the-box ideas.

After grouping the ideas, key themes emerged: author-focused, student-focused, visual formatting, and feedback content. The top-voted ideas by the team were further developed into the foundations of our storyboards.
Idea Validation | Storyboards and Speed Dating
From our Crazy 8s synthesis, we selected 7 storyboards that sketched situations exemplifying our design concepts. We then ran 11 Speed Dating sessions in which authors and students reacted to each storyboard to validate our ideas.

The storyboards included:
• Students providing feedback on the auto-grader
• AI-generated "clear your mind" videos
• Nicer auto-grader format with author templates
• More information with additional submissions
• Automatic Piazza posts
• Chatbot highlighting relevant information
• Grouping failed tests by error

 Mid-Fidelity: Testing Content and Functionality

01 | Initial Mid-Fi Design Sketches
We began designing the mid-fidelity prototype through asynchronous brainstorming and individual sketches. We noticed our visions converging on the visual layout of the auto-grader feedback, the use of progressive disclosure and color-coding, and the placement and activation of the chatbot.
From our prototyping questions and a synthesis of our initial sketches, we created our final mid-fidelity prototype and began testing how well it addressed both the breadth and depth of our prior insights and solution ideas.
02 | Mid-Fi Prototype and Think-Aloud Testing
Guiding Question: 
What specific feedback and chatbot content would be most helpful to students’ independent learning and exploration?
The final mid-fi prototype includes three Figma frames:

1. Interactive landing page where students can expand and collapse auto-grader feedback, with buttons linking to topic primers and “Ask On Piazza”.

2. Chatbot interactions, including initiation and a chat window with suggested prompts. We envisioned the chatbot as a smart navigation tool within Sail(), offering course resources on demand.
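As a rough sketch of the expand-and-collapse behavior in frame 1, progressive disclosure can be expressed with native HTML details/summary elements; the class names, colors, links, and test content below are hypothetical rather than Sail()'s actual markup.

```html
<!-- Minimal sketch of the collapsible feedback list: passed tests are
     collapsed by default, the failed test is expanded, and each failure
     links to next steps. All names and content are hypothetical. -->
<section class="feedback">
  <details>
    <summary style="color:#1a7f37">PASSED · test_load_dataset (10/10)</summary>
    <p>All assertions passed.</p>
  </details>
  <details open>
    <summary style="color:#cf222e">FAILED · test_normalize_features (0/15)</summary>
    <p>AssertionError: expected column means of 0, got 0.43</p>
    <p>
      <a href="#primer-normalization">Topic primer</a> ·
      <a href="#piazza">Ask On Piazza</a>
    </p>
  </details>
</section>
```

Native details/summary provides the expand/collapse interaction without scripting, and color-coding the summaries lets students scan their overall performance before opening any single test.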
We tested the prototype with 7 participants through think-aloud sessions, using a Wizard-of-Oz approach to simulate the chatbot interactions.

We synthesized our findings using affinity diagrams, and developed corresponding design ideas.

This process led to the following key insights.
Insight 1
Progressive disclosure is key for students to quickly understand their general performance before looking into specific details.
Insight 2
Students need to be able to tell a feature's value from the button itself in order to pursue the intended next steps.
Insight 3
While students feel apprehensive about using generative LLMs like ChatGPT for assignments, they would trust a chatbot trained on Sail() content.
Insight 4
Students generally prefer chatbots for simpler questions, where they get an immediate response, and TAs for more nuanced help.

 High-Fidelity: Testing Interaction

01 | High-Fidelity Testing
After iterating on our mid-fidelity prototype and creating our high-fidelity product, we conducted 7 think-aloud sessions, which helped us realize two key aspects of our prototype.
01
Encouraging Independent Exploration
The solution empowers students to independently explore and exhaust potential next steps before seeking assistance from course instructors.
02
Promoting Well-Prepared Questions
By completing these steps, students are able to ask more informed questions, providing detailed context, strategies they’ve tried, and theories about their issues.
02 | Hi-Fi Prototype Design

Deliver

 Feature Overview

 Feature 1 | Colored and segmented display
 Feature 2 | High-level summary of the submission
 Feature 3 | Contextual chatbot and dynamic next steps
 Feature 4 | Sail()-specific chatbot
 Feature 5 | Copy question template
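To make Feature 5 concrete, here is a minimal sketch of a copy-to-clipboard question template using the standard Clipboard API; the element IDs and template fields are hypothetical, not the shipped implementation.

```html
<!-- Hypothetical sketch of Feature 5: copying a pre-filled question
     scaffold so students can paste a well-prepared question into Piazza. -->
<button id="copy-template">Copy question template</button>
<script>
  const template = [
    "Failing test / project step:",
    "What I expected vs. what happened:",
    "What I have already tried:",
    "My current theory about the issue:"
  ].join("\n");

  document.getElementById("copy-template").addEventListener("click", () => {
    // The Clipboard API requires a secure (HTTPS) context.
    navigator.clipboard.writeText(template);
  });
</script>
```

A scaffold like this supports the well-prepared questions outcome from high-fidelity testing: it prompts students for context, the strategies they have tried, and their theories before they post.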

 Interactive Prototype

 Reflections

Goals Revisited
By the end of the project, we were able to transform the auto-grader feedback from an obstacle into a facilitator of student learning and debugging.

To do so, we implemented features that touch on each insight from our research phase relating to what effective auto-grader feedback looks like. 
Value Added to Our Client
The final design we landed on is not merely the product of traditional design practices applied to improve the visual interface layout.

It is a blend of human-centered design and pedagogical research that enhances the overall student experience by enabling students' exploratory habits and goal-driven inquiries.