A drag-and-drop web app for teaching Python memory models with guided practice, free-form testing, and automatic grading.
During my first years at UofT, one of the most annoying parts of studying computer science was drawing memory model diagrams by hand; it was always a slog. I wanted an app that made it easy to practice using a drag-and-drop canvas, removing the repetitive work of constantly drawing and redrawing frames and objects. At the same time, I wanted a way to quickly identify mistakes in my memory models, with clear explanations of why they didn't match the underlying code.
The core of MemoryLab is a canvas where students drag and drop memory model objects, just like sketching on paper but digital. The canvas supports all the fundamental Python data types: primitives (integers, strings, booleans), collections (lists, tuples, sets, dictionaries), and custom objects. Each element on the canvas can be connected to others via references, creating a visual representation of how Python stores and links data in memory.
The call stack is also part of the canvas. Students can create function frames that represent different scopes in their program, add variables to each frame, and point those variables to values stored elsewhere on the canvas. This mimics how Python's execution model actually works: frames on the call stack hold variable names, and those names reference objects in the heap.
When a student completes their diagram, the entire canvas state is serialized into JSON. Each memory object becomes a node with a unique ID, type, and value. References between objects are captured as ID mappings, creating a directed graph structure. Function frames are serialized with their variable mappings, and the call stack order is preserved. This JSON representation is then sent to the backend API for validation.
The serialization process builds a graph where each memory object is a node, each reference is a directed edge between object IDs, and each frame maps variable names to the IDs of the objects they point to.
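As a rough sketch, the serialized canvas for a = 1 followed by lst = [a, a] might look like the structure below; the field names (stack, heap, elements) are illustrative, not MemoryLab's exact schema:

canvas_state = {
    "stack": [
        {"frame": "__main__", "variables": {"a": "id1", "lst": "id2"}},
    ],
    "heap": {
        "id1": {"type": "int", "value": 1},
        # Both list slots reference id1, so aliasing is visible in the graph.
        "id2": {"type": "list", "elements": ["id1", "id1"]},
    },
}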
The validation algorithm treats memory models as graphs and checks for graph isomorphism—ensuring there's a one-to-one mapping between the student's canvas drawing and the expected memory state. The expected model is generated by executing the Python code, and the student's job is to accurately draw what the memory looks like at that point in execution.
The algorithm performs several key checks: that every expected frame and variable is present, that values and types match, that references and aliasing are drawn correctly (recursing through nested containers), that the mapping between expected and drawn objects is a true bijection, and that no orphan objects are left on the canvas. Each of these appears in the worked example below.
The algorithm uses a visited map to handle circular references without infinite loops, and maintains bidirectional mappings (expected→student and student→expected) to enforce the bijection property.
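Below is a minimal sketch of that comparison, assuming the simplified node shape from the earlier snippet; the function name, error wording, and schema are illustrative rather than the actual implementation:

def compare(exp_id, stu_id, expected, student, exp_to_stu, stu_to_exp, errors, path):
    # The mapping doubles as the visited set: once an expected node is mapped,
    # revisiting it (through a cycle or an alias) only verifies the mapping.
    if exp_id in exp_to_stu:
        if exp_to_stu[exp_id] != stu_id:
            errors.append(f"{path}: should reference the same object as drawn earlier (aliasing)")
        return
    if stu_id in stu_to_exp:
        errors.append(f"{path}: this drawn object already stands for a different expected object")
        return
    exp_to_stu[exp_id] = stu_id
    stu_to_exp[stu_id] = exp_id

    exp, stu = expected[exp_id], student.get(stu_id)
    if stu is None:
        errors.append(f"{path}: points to an object that is not on the canvas")
        return
    if exp["type"] != stu["type"]:
        errors.append(f"{path}: expected a {exp['type']}, found a {stu['type']}")
        return
    if exp["type"] == "list":
        if len(exp["elements"]) != len(stu["elements"]):
            errors.append(f"{path}: wrong number of elements")
            return
        for i, (e, s) in enumerate(zip(exp["elements"], stu["elements"])):
            compare(e, s, expected, student, exp_to_stu, stu_to_exp, errors, f"{path} → [{i}]")
    elif exp["value"] != stu["value"]:
        errors.append(f"{path}: expected value {exp['value']!r}, found {stu['value']!r}")

The validator would call something like this once per variable in each frame, seeding path with the frame and variable name so every error is anchored to a location the student can find on the canvas.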
Given the Python code:
a = 1
b = 2
c = a
lst = [a, b]
lst2 = [a, lst, c]
lst[1] = 4
After running this code, the expected memory state is compared against the student's canvas drawing:
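Roughly, the expected model the validator compares against looks like the structure below, using the object IDs referenced in the walkthrough; the layout and field names are a simplified sketch rather than the exact serialization:

expected_state = {
    "stack": [
        {"frame": "__main__",
         "variables": {"a": "id1", "b": "id2", "c": "id1", "lst": "id4", "lst2": "id5"}},
    ],
    "heap": {
        "id1": {"type": "int", "value": 1},
        "id2": {"type": "int", "value": 2},
        "id4": {"type": "list", "elements": ["id1", "id6"]},          # lst after lst[1] = 4
        "id5": {"type": "list", "elements": ["id1", "id4", "id1"]},   # lst2 = [a, lst, c]
        "id6": {"type": "int", "value": 4},
    },
}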

How the Validator Checks the Canvas:
Frame Check: The expected model has a __main__ frame; the validator confirms the student drew this frame ✓
Variable Check: The expected model has variables a, b, c, lst, and lst2. The validator confirms all are on the student's canvas ✓
Variable a: Expected to reference id1 (int with value 1). Validator checks the student's canvas points a to the same object ✓
Variable b: Expected to reference id2 (int with value 2). After the mutation lst[1] = 4, variable b still points to 2, but the canvas must show that lst at index 1 now contains id6 (int with value 4) ✓
Variable c: Expected to reference id1, the same object as a (aliasing). Validator confirms the student drew c pointing to the same box as a ✓
Variable lst: Expected to point to id4 (a list). The validator recursively checks each element:
Index 0: references id1, the same object as a and c ✓
Index 1: references id6 (the int 4 written by lst[1] = 4) ✓
Variable lst2: Expected to point to id5 (another list). Validator recursively checks each element:
Index 0: references id1 (the same object as a) ✓
Index 1: references id4 (the lst object) ✓
Index 2: references id1 (again the same object as a) ✓
Bijection Check: Ensures the student's ID mappings create a perfect one-to-one correspondence with the expected model (no duplicate IDs or misaligned references) ✓
Orphan Check: Verifies the student didn't draw extra objects on the canvas that aren't referenced from any variable ✓
The algorithm returns a detailed error report if any check fails, highlighting exactly which object or reference is incorrect on the student's canvas.
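As one illustration, the orphan check can be thought of as a reachability pass over the drawn graph; this sketch assumes the simplified serialization shown earlier and is not the exact implementation:

def find_orphans(student):
    # Collect every object reachable from a variable in some frame.
    reachable, worklist = set(), []
    for frame in student["stack"]:
        worklist.extend(frame["variables"].values())
    while worklist:
        node_id = worklist.pop()
        if node_id in reachable:
            continue
        reachable.add(node_id)
        worklist.extend(student["heap"][node_id].get("elements", []))
    # Anything drawn on the canvas but never reached is an orphan.
    return set(student["heap"]) - reachable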
The validation system doesn't just return a pass/fail result—it provides structured, actionable feedback that helps students understand exactly what's wrong with their canvas.
Error Structure:
Each error includes a human-readable message, a path that pinpoints where the mismatch occurred (for example, function "__main__" → var "lst" → [1]), the ID of the canvas element at fault, and a relatedIds field listing any other elements involved in the problem.
Visual Highlighting:
The real power of this system is the visual feedback. When an error is detected, the element IDs are used to highlight the incorrect objects directly on the canvas. For example, if a student incorrectly maps variable a to id3 instead of id1, the system highlights both the variable in the frame and the incorrectly referenced object, making the mistake immediately visible.
For errors involving multiple elements (like missing list elements or aliasing issues), the relatedIds field ensures all relevant objects are highlighted together, showing the full context of the problem.
Example Error Messages:
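For the worked example above, the errors might read something like the entries below; the wording, element IDs, and field names (other than relatedIds) are approximations, not the exact output:

example_errors = [
    {
        "message": "Variable a should reference the int 1, but points to a different object",
        "path": 'function "__main__" → var "a"',
        "elementId": "var-a",
        "relatedIds": ["obj-int-1"],
    },
    {
        "message": "lst at index [1] should contain the int 4 after lst[1] = 4, but still shows 2",
        "path": 'function "__main__" → var "lst" → [1]',
        "elementId": "obj-lst",
        "relatedIds": ["obj-int-4"],
    },
]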
All errors are consolidated in a dedicated Errors Tab, providing an easy-to-scan list of all problems. This tab acts as a checklist: students can work through each error one by one, fixing issues on the canvas and seeing the highlighted elements update in real-time. The combination of the error list and visual highlighting on the canvas makes debugging intuitive and systematic.
Iterative Learning:
This feedback system transforms debugging into a learning opportunity. Instead of trial-and-error guessing, students can read each error in the list, jump to the highlighted elements on the canvas, fix one issue at a time, and re-validate until the model matches the code.
The system handles complex scenarios gracefully: circular references, nested containers, aliasing bugs, and call stack errors all produce clear, localized feedback. Students learn to think about memory models more precisely because the feedback directly reinforces correct mental models.
What I Learned:
How to build core interactive experiences in web apps, mainly pages, portals, and tools, building on my previous experience and pushing into more complex UI state and interactions.
Designing and implementing custom algorithms to validate user work. In particular, building a validation system that compares two memory models, checks structural equivalence, and handles tricky cases like shared references and cycles, while keeping results consistent, covering edge cases, and avoiding noisy false positives.
Improved web design and development skills, including layout, spacing, and overall UI clarity, while also working closely with backend systems to keep the experience fast, reliable, and easy to iterate on.
How to write clearer, more actionable feedback so students can understand mistakes and fix them faster, and how to integrate that feedback into a clean, compact UI that stays out of the way and helps them work through issues one problem at a time.