Over the past two years, I've built and shipped five AI-powered tools — from an intelligent calendar assistant to a philosophy debate platform. Here's what I've learned about building with large language models (LLMs) as a student developer.
Start with a Real Problem
Every project that stuck started with a personal frustration. Orbit began because I was tired of manually scheduling around my classes. Slide Agent came from watching classmates spend hours formatting presentations when their content was already solid.
The projects that started as "I want to build something with GPT" never went anywhere. The ones that started with "I wish this existed" became tools I actually use.
The LLM Is Not the Product
This is the biggest lesson. An LLM API call is a component, not an application. The real engineering is everything around it:
- Context management — what information does the model need, and when?
- Failure handling — what happens when the API is slow, wrong, or down?
- User experience — how do you make AI interactions feel natural, not clunky?
- Cost control — how do you keep API costs reasonable at scale?
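Of those four, failure handling is the one that bites first: a single slow or failed API call shouldn't take your whole feature down. Here's a minimal sketch of the retry-with-backoff pattern in Python. `call_llm` is a hypothetical stand-in for whatever client function you actually use; the backoff numbers are illustrative, not tuned:

```python
import random
import time


def call_with_retries(call_llm, prompt, max_retries=3, base_delay=1.0):
    """Retry a flaky LLM call with exponential backoff and jitter.

    `call_llm` is assumed to raise on transient failures (timeouts,
    rate limits) and return the model's text on success.
    """
    for attempt in range(max_retries + 1):
        try:
            return call_llm(prompt)
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the error to the caller
            # Exponential backoff, plus jitter so concurrent clients
            # don't all retry at the same instant
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

The same wrapper is also a natural place to hang cost controls (token counting, per-user budgets) and a fallback path, like a cached response or a "try again later" message, for when retries are exhausted.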
With Dialogue, the philosophy platform, I spent maybe 20% of development time on the LLM integration and 80% on the debate interface, argument tracking, source citation system, and the overall conversational flow.
Ship Early, Iterate Constantly
My first version of Orbit was embarrassing. The calendar sync broke constantly, the natural language parsing was brittle, and the UI was a mess. But people used it anyway, because the core idea — "tell your calendar what you want in plain English" — was compelling enough.
Each project taught me to separate the core value proposition from the polish. Ship the core first. If people want it, they'll tolerate rough edges while you improve.
The Tech Stack Matters Less Than You Think
I've built these projects with:
- Next.js + React for web apps
- Swift for native macOS (Stack)
- Flask + PyTorch for ML deployment (YOLOv7)
- Vue.js for experimental interfaces
The common thread isn't the framework — it's the willingness to pick whatever tool fits the problem and learn it fast. As a student, your biggest advantage is that you have permission to be a beginner.
What I'd Do Differently
If I could restart, I'd spend more time on:
- Testing — not unit tests for the sake of coverage, but end-to-end tests that catch real user workflows breaking
- Documentation — future-me always suffers when past-me didn't document the "why" behind architectural decisions
- Saying no — every project grew scope beyond the original vision, and the extra features were rarely worth the complexity
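To make the testing point concrete, here's the shape of a workflow-level test for something like Orbit's scheduling flow. The parser below is a deliberately naive toy stand-in (the real parsing was an LLM call and far messier); the point is the test itself, which exercises a full user phrasing end to end and asserts on the final event rather than on any one internal unit:

```python
import datetime


def parse_event(text, today):
    """Toy natural-language event parser, a hypothetical stand-in for
    the real pipeline, kept trivial so this example is self-contained."""
    words = text.lower().split()
    title = words[0]
    day = today + datetime.timedelta(days=1) if "tomorrow" in words else today
    hour = int(words[words.index("at") + 1]) if "at" in words else 9
    return {"title": title, "date": day, "hour": hour}


def test_schedule_workflow():
    """Assert on what the user would actually see: the final event,
    built from their original phrasing, not an intermediate value."""
    today = datetime.date(2024, 1, 15)
    event = parse_event("lunch tomorrow at 12", today)
    assert event == {
        "title": "lunch",
        "date": datetime.date(2024, 1, 16),
        "hour": 12,
    }
```

A handful of tests like this, one per core user workflow, would have caught most of the breakage my early versions shipped with, at a fraction of the effort of chasing line coverage.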
The Best Time to Build
There's never been a better time to build AI tools. The APIs are accessible, the models are capable, and the design space is still wide open. If you're a student with an idea, build it. Ship it. Learn from it.
The worst that happens is you have a portfolio project that demonstrates real engineering ability. The best that happens is you build something people actually use.
These are my personal reflections after two years of building. Your mileage may vary, but I hope something here is useful.