Students accumulate materials across formats — PDF lecture slides, Jupyter notebooks, screenshots of diagrams, handwritten notes captured on a phone. The common workflow is to drop each into ChatGPT for a quick summary. The summary is useful, but it disappears into the conversation history, disconnected from every other summary generated before it.
The result: each document is understood individually, but the relationships between them stay invisible. You've studied the Transformer, BERT, and GPT separately, yet you can't trace how self-attention is used differently across the three — because each summary exists in its own silo.