Transforming an AI RAG framework into usable software for instructors.
Saves time by automating repetitive content creation.
Enables rapid updates to materials, keeping courses aligned with current knowledge or events.
Improves consistency across different types of content (e.g., aligning test questions with lecture material).
Supports scalability for large classes or online courses.
Assists adjunct and part-time faculty, who often have less institutional support.
I collaborated with CorpusKey, an ed-tech startup, to design an intuitive, user-friendly interface that enables educators to create academic coursework with the help of an AI agent. Working closely with the CEO and four other Purdue UX students, I led design decisions from research through final UI delivery and ran a usability workshop to validate key interactions.
The Challenge.
Developing course materials—tests, lectures, study guides—takes significant time for professors already stretched thin by teaching, grading, and research. CorpusKey's AI promised to automate this work with an agent built on a retrieval-augmented generation (RAG) framework, but its current state was unusable.
Our student UX team was tasked with bridging the gap between the back-end AI and usable software.
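We never saw CorpusKey's back end in detail, but understanding the basic shape of a RAG pipeline helped us design around it. The sketch below is a minimal, hypothetical illustration of the idea: retrieve the documents most relevant to a topic from the instructor's uploads, then hand them to a generative model as context. Every name in it (score_overlap, build_prompt, the sample repository) is ours, not CorpusKey's.

# Minimal RAG sketch (hypothetical; not CorpusKey's implementation).
# The core idea: retrieve relevant source material, then condition the
# language model's output on it instead of generating from scratch.

def score_overlap(query: str, doc: str) -> int:
    # Naive relevance score: count words shared by the query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, repository: list[str], k: int = 2) -> list[str]:
    # Return the k documents most relevant to the query.
    return sorted(repository, key=lambda d: score_overlap(query, d), reverse=True)[:k]

def build_prompt(topic: str, repository: list[str]) -> str:
    # Assemble what the model would receive: an instruction plus retrieved context.
    context = "\n---\n".join(retrieve(topic, repository))
    return f"Using only the course material below, draft a lecture on '{topic}'.\n{context}"

# The instructor's uploaded files act as the retrieval corpus.
repo = [
    "Week 1 notes: variables, types, and expressions in Python.",
    "Week 2 notes: control flow, loops, and functions.",
    "Syllabus: grading policy and office hours.",
]
print(build_prompt("loops and functions", repo))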
Aligning the Team to Understand Our AI Agent.
Our team was fairly new to AI when we were assigned this project. To gain alignment, shape our core assumptions, and communicate project goals, we met with John weekly, both in person and remotely.
We mapped a user flow for how professors would interact with the AI agent, focusing on four stages: outline (prompt), repository (train), draft (generate), and revise (edit). This flow is essential to getting the user to their goal.
User flow generated from our weekly meetings with John, aligning our team on the end user's goal.
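Thinking of the flow as a simple linear state machine helped us keep the stages straight. The sketch below is our own illustration of that mental model, not CorpusKey's code.

# The four-stage flow as a linear state machine (illustrative only).
from enum import Enum

class Stage(Enum):
    OUTLINE = "prompt"      # instructor structures what they want
    REPOSITORY = "train"    # instructor supplies source material
    DRAFT = "generate"      # AI agent produces the content
    REVISE = "edit"         # instructor refines the output

ORDER = list(Stage)

def next_stage(current: Stage) -> "Stage | None":
    # Advance to the following stage; None means the flow is complete.
    i = ORDER.index(current)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None

stage = Stage.OUTLINE
while stage is not None:
    print(f"{stage.name.lower()} ({stage.value})")
    stage = next_stage(stage)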
This activity prepared us to start sketching and wireframing, but the team still struggled to understand the outline stage of the user flow.
Breaking Down the Outline.
The outline is the main prompting interaction between the user and the AI agent. Designing it intuitively was pivotal to helping instructors generate their desired output.
At the time, the outline was just a Microsoft Excel sheet broken into categories. The format was confusing even for our team, let alone for professors expected to work with it. To make sense of the sheet, we divided it into highlighted sections, then created a flow diagram showing how a user interacts with the outline.
The current outline Microsoft Excel sheet, including highlights to help our team understand its hierarchy.
User flow our team created to understand how a user interacts with the outline.
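In practice, each row of the sheet amounts to one instruction for the agent. The sketch below shows that idea with made-up column names (category, topic, detail); the real sheet's categories differed.

# Hypothetical reading of the outline sheet (column names are our invention).
# Each row becomes one instruction for the AI agent.
import csv, io

outline_csv = """category,topic,detail
Lecture,Photosynthesis,Cover the light-dependent reactions
Quiz,Photosynthesis,Five multiple-choice questions
Study Guide,Cell Respiration,Key terms and diagrams
"""

def outline_to_prompts(raw: str) -> list[str]:
    # Turn each outline row into a generation instruction.
    rows = csv.DictReader(io.StringIO(raw))
    return [f"Generate a {r['category'].lower()} on {r['topic']}: {r['detail']}." for r in rows]

for prompt in outline_to_prompts(outline_csv):
    print(prompt)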
After completing these activities, the team shared a clear understanding of how outline prompting works for this agent, which made us comfortable creating our first concept wireframes.
Designing the First Concepts.
John wanted some starting wireframes from our team so he could begin communicating our direction to his developer.
Based on our earlier flows, we created several wireframes showing how the user would interact with the AI agent.
Home page of the AI software, where the user adds and edits courses for AI generation.
The outline, where the user prompts the AI agent.
The repository, where the user trains the AI agent.
Revise, where users will edit generated content.
Gaining Feedback from John.
We delivered our wireframes to John and his developer, who gave us feedback.

John Burr, CEO of CorpusKey.
I really like the direction you guys are going.
Having the course builder in a timeline format supports the end goal.
Ensure that the user has the ability to create an account and log out.
Hold off on the "revise" step for now.
I would like the user to be able to generate off of templates as an option.
The user should upload their documents first, before prompting the AI with the outline.
After they finalize document upload, the user needs to lock their files in the repository.
In the future, the ability to add other collaborators to a project will be available.
Ensuring Our Interfaces Are Usable.
We created a mid-fidelity prototype of the primary user–AI interaction flow and tested it with two participants unfamiliar with the product. Our goal was to evaluate whether users could successfully navigate from input (relevant files/outline) to output (AI-generated content).
Task Completion:
Both participants were able to complete the flow from input to output with minimal guidance. They successfully generated course outlines and content without confusion.
Information Architecture:
Users understood the meaning and hierarchy of the "outline" structure, indicating that the mental model aligned well with user expectations.
Affordance & Terminology:
Some elements lacked clarity. Participants were unsure of the purpose of creating a "class" and the role of tags, suggesting a need for improved affordance cues and clearer labeling.
Familiarity & UI Comfort:
Participants noted the interface felt familiar and resembled tools like Google Drive, which contributed positively to usability and reduced cognitive load.
Scalability & Feature Suggestions:
Users expressed interest in additional features, such as a "gradebook" view, indicating room for future expansion and feature alignment with educator workflows.
Delivering the Final Design.
After our usability testing, we designed a final iteration to present and hand off to our sponsor, John Burr.
Added Login Page:
John asked us to design a login page so users can log in to the experience.
Renamed "Class" to "Project" (for now):
This aligns with other content creation tools and better explains the affordance.
Removed Revise Step:
As requested by John, we temporarily removed the "revise" step from this iteration.
Locking the Repository:
John informed us that the user must "lock" the repository before generation.
Scalability & Future Features:
We added a sidebar for future affordances like the recommended "gradebook."
Cleaned Up Hierarchy and UI Clutter:
Font sizes, colors, etc.
Users can begin interacting with the AI agent and generating content straight from the "homepage."
After selecting "New Project" users can establish their first project.
The first step in generating content is uploading the files the user wants the AI agent to analyze (train). Users can "lock" files with a checkbox.
The user must "lock" the files in order for the AI agent to analyze them.
Next, users can prompt the AI agent via the outline.
Users can add topics to further prompt the AI agent; the more detail the user provides, the more accurate and curated the agent's output will be.
After prompting the AI, the user will select what materials they would like to be generated.
Once all steps are complete, the user is notified that the content is being generated, and receives a second notification when the content is ready for use.
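Putting the final flow together: files must be locked before the agent will analyze them, and the user hears back twice, once when generation starts and once when content is ready. Below is a small reconstruction of that logic under our own assumptions; File, Project, and generate are hypothetical names, not CorpusKey's code.

# Sketch of the final interaction rules (our reconstruction, illustrative only).
from dataclasses import dataclass, field

@dataclass
class File:
    name: str
    locked: bool = False

@dataclass
class Project:
    repository: list = field(default_factory=list)

    def generate(self, outline: list, materials: list) -> None:
        # The agent refuses to analyze the repository until every file is locked.
        unlocked = [f.name for f in self.repository if not f.locked]
        if unlocked:
            raise ValueError(f"Lock these files before generating: {unlocked}")
        print("Notification: your content is being generated...")
        # ...the AI agent would analyze the locked files and the outline here...
        print(f"Notification: {', '.join(materials)} are ready for use.")

project = Project(repository=[File("week1_notes.pdf"), File("syllabus.docx")])
for f in project.repository:
    f.locked = True  # the checkbox "lock" step from the repository screen
project.generate(outline=["Photosynthesis lecture"], materials=["lecture slides", "quiz"])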
Reflection & Retrospective.
This project sparked my interest in the intersection of UX and AI, an area I am actively exploring. It was my first UX project where I designed both how a user interacts with an AI and how the AI responds. Our final deliverables gave John a strong foundation to continue building his AI startup.
What went well:
Our weekly meetings with John were collaborative, actionable, and highly informative.
Although this was the smallest UX team I’ve worked with (5 students total), we made up for it through intense collaboration during tri-weekly internal meetings.
We adapted to a complex, unfamiliar AI design space and learned how to communicate effectively across roles.
What didn't:
A significant portion of the project was spent trying to align on how the AI framework functioned, which led to early misunderstandings and impacted morale.
We could have proactively brought in more feedback by involving professors or external mentors earlier in the design process.
What to improve next time:
Spend more time upfront aligning on technical constraints and goals with both the client and teammates.
Take more initiative in identifying and recruiting users to test our early concepts.
Use additional methodologies (like card sorting, concept testing, or expert reviews) to validate ambiguous parts of the system.
Conclusion.
We presented our work in person to our UX Experience Studio class through a team slide deck, while John and his developer joined remotely via Zoom. Following the presentation, we held a final handoff meeting where we delivered our complete project documentation.
The following semester, a new team of Purdue UX students continued the work, building on our foundation to further develop John’s vision for AI-powered education. It was rewarding to contribute to a project with lasting impact and ongoing development.