How I Built the Sagent App
(without writing a line of code, while barely touching the keyboard)
Inspiration: Chatting with the Greats
I’ve always wanted a way to ask real-life questions of clear thinkers - Buddha, Laozi, Sagan, Twain. Sagent is my attempt: an iOS app where you pick a wisdom figure and have a conversation about whatever’s on your mind, getting their guidance on the problems of today.
I’m not a professional engineer and, before this, I hadn’t shipped a line of production code. I built Sagent by treating AI tools as collaborators and keeping the process disciplined and incremental.
(Models and tools mentioned here reflect what I used as of June–July 2025; this space changes fast and so does my workflow.)
Where I’m coming from
Everyone starts from a different place. I didn’t arrive empty‑handed: I’ve used Python for data analysis, worked closely with engineers, studied data structures, and taken CS classes. That background mostly prepared me to think about the problems - breaking work down, naming things well, and caring about the shape of systems. If your starting point is different, that’s fine - the workflow below still applies.
Tech stack: simple, standard, secure
Backend: Node.js (Express). REST API + SSE for streaming chat (sketched just after this list).
Database: MongoDB for accounts and conversation history.
Frontend: React Native with Expo; tested in Expo Go and shipped via TestFlight.
AI: OpenAI models for responses; each “sage” is flavored by a short document I wrote to capture their voice and perspective.
Security & privacy: JWT auth, bcrypt, CORS, Helmet, rate limiting. Conversations aren’t used to train models.
I chose a stack an AI pair‑programmer could work with and that I could reason about. Don’t be intimidated by any of the funny words above that you don’t know yet - understanding is a couple of short YouTube videos and a few clarifying questions to ChatGPT away. When in doubt, pick the boring, well‑documented option - your agents will do better work, and you can always swap parts later.
The toolset (roles, not ranks)
ChatGPT o3 w/ Deep Research + GitHub access — planning partner, spec co‑author, and auditor of diffs/PRs. (Today I’d swap in GPT‑5 Pro w/ Deep Research.)
Cursor (Autopilot) with Claude Sonnet — code implementer for multi‑file changes. I used Opus for the occasional thorny problem.
Wispr Flow — I voice‑coded ~95% of the time.
UX Pilot — quick wireframes and UI ideas.
MidJourney — profile images for the sages.
n8n — some side automation (I’ll cover that separately).
Time to MVP: ~60–80 focused hours over a few weeks.
Out‑of‑pocket cost: ~$300–$450 (pro subscriptions, a couple hundred in tokens across projects, misc.).
How I used ChatGPT and Cursor together
I work in layers:
Design with ChatGPT. I describe the feature, user flow, edge cases, and constraints. We turn that into a short spec with checklists.
Implement with Cursor (Sonnet). I feed Cursor one small step at a time (“build the GuidedIntro component with steps A/B/C”; “add /chat/switch-figure API”; etc.). Cursor edits/creates files, then shows me the diff.
Audit with ChatGPT. I have ChatGPT read the diff on GitHub and comment—risks, style issues, null handling, security notes. I fix or send the patch back through Cursor.
Two concrete examples:
Guided flow for new users. I talked through how the app should introduce the sages (what they’re good at, starter prompts, a short on‑ramp). ChatGPT and I broke it into components and state; Cursor built each piece. ChatGPT’s review caught a null‑state edge case before shipping.
Adding a new wisdom figure mid‑conversation. Harder: a new “voice” has to join an ongoing thread and feel context‑aware. Instead of “Cursor, make it work,” I first designed the data flow with ChatGPT—how to package recent messages, how the newcomer introduces itself, and how to avoid repeating earlier guidance. Then Cursor implemented it step‑by‑step. Order matters: think → spec → small tasks → implement → audit.
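Here’s a sketch of the data flow we settled on, in spirit - the field names, the recent‑message window, and the introduction instruction are all my assumptions for illustration, not Sagent’s actual payload.

```typescript
// Hypothetical shape of the context handed to a sage joining
// mid-conversation: recent turns, the newcomer's voice doc, and an
// instruction to introduce itself without repeating earlier guidance.
type Turn = { speaker: string; text: string };

function buildSwitchContext(
  voiceDoc: string,   // the new sage's "voice" document
  history: Turn[],    // full conversation so far
  recentWindow = 12   // how many turns the newcomer actually sees
) {
  const recent = history.slice(-recentWindow);
  const transcript = recent
    .map((t) => `${t.speaker}: ${t.text}`)
    .join("\n");

  return [
    { role: "system" as const, content: voiceDoc },
    {
      role: "system" as const,
      content:
        "You are joining an ongoing conversation. Read the transcript, " +
        "introduce yourself briefly in character, and add a new " +
        "perspective. Do not repeat advice already given.",
    },
    { role: "user" as const, content: transcript },
  ];
}
```

Having a shape like this on paper first is what made the Cursor tasks small and unambiguous.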
A note on prompting: the magic isn’t fancy jargon - it’s context and clarity. I keep a Google Doc of draft prompts (very Gen X of me). I often ask the model to outline a plan first, pause, then generate code only after we agree on the approach. If I’m stuck, I’ll ask two or three models the same question and compare answers.
Development workflow (tight loops)
Define the next smallest useful thing.
Write the spec with ChatGPT.
Have Cursor implement.
Run it on device (Expo Go).
Fix, refactor, remove dead code.
Repeat.
Designing the interface
UX Pilot gave me starting layouts. Cursor turned those into React Native components: chat bubbles (left/right alignment plus avatars), auth screens, navigation. I also added voice input for users - if I can talk to my IDE, you should be able to talk to the Sages (they don’t talk back audibly yet, but they will soon).
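For flavor, here’s roughly what one of those pieces looks like - an illustrative chat bubble in the spirit of what Cursor generated, with made‑up styling, not the shipped code.

```tsx
// Illustrative React Native chat bubble: user messages align right,
// sage messages align left with an avatar. Colors/sizes are invented.
import React from "react";
import { View, Text, Image, StyleSheet } from "react-native";

type Props = { text: string; fromUser: boolean; avatarUri?: string };

export function ChatBubble({ text, fromUser, avatarUri }: Props) {
  return (
    <View style={[styles.row, fromUser ? styles.right : styles.left]}>
      {!fromUser && avatarUri && (
        <Image source={{ uri: avatarUri }} style={styles.avatar} />
      )}
      <View style={[styles.bubble, fromUser ? styles.user : styles.sage]}>
        <Text style={styles.text}>{text}</Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  row: { flexDirection: "row", marginVertical: 4, paddingHorizontal: 8 },
  left: { justifyContent: "flex-start" },
  right: { justifyContent: "flex-end" },
  avatar: { width: 32, height: 32, borderRadius: 16, marginRight: 6 },
  bubble: { maxWidth: "75%", borderRadius: 16, padding: 10 },
  user: { backgroundColor: "#dbeafe" },
  sage: { backgroundColor: "#e5e7eb" },
  text: { fontSize: 15 },
});
```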
Crafting the Sages (content layer)
Each figure has a “voice” doc I wrote: themes, tone, perspective. When you ask something, the backend threads relevant snippets into the prompt so answers stay on‑character. The MidJourney portraits make the roster feel welcoming. Eventually I want interactive talking avatars, but the tools aren’t quite there yet (and are too expensive token‑wise).
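A sketch of that snippet‑threading idea under simple assumptions - the voice doc pre‑split into snippets, scored by keyword overlap with the question. Sagent’s real selection may well differ; this just shows the shape.

```typescript
// Pick the k snippets from a sage's voice doc that best overlap the
// user's question (naive keyword scoring; purely illustrative).
function pickSnippets(question: string, snippets: string[], k = 3): string[] {
  const words = new Set(
    question.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  return snippets
    .map((s) => ({
      s,
      score: s.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.s);
}

// The chosen snippets get folded into the system prompt so the sage
// stays on-character for this particular question.
const systemPrompt = (name: string, picked: string[]) =>
  `You are ${name}. Stay in character.\n\nRelevant notes:\n` +
  picked.map((s) => `- ${s}`).join("\n");
```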
What helped (and what didn’t)
Small tasks win. Cursor shines when the scope is tight and the goal is clear.
Audit everything. AI code runs - it isn’t automatically right. Reviews caught things I missed.
Don’t skip cleanup. AI leaves lint warnings and leftover code - periodic refactors matter.
IDE still matters. I tried the talk‑to‑build experiments. Great for simple apps. Once complexity rises, a real IDE with agents inside it is the sweet spot.
Abstract complexity behind a good UI. Many “hard” things can be taught to users gently through UX, or hidden behind sane defaults.
You can learn fast. A few YouTube videos plus careful prompts go a long way. When confused, ask an AI—or a few of them—and triangulate.
What Sagent says (quietly) about what’s coming
For me, these tools made it possible to reach a working MVP quickly and learn from real usage before raising money or hiring. Lowering the cost of the first mile means more software gets made by more people. That doesn’t erase developer jobs - quite the opposite, if you ask me: it expands the surface area where expertise is needed, especially once a project grows beyond what one person can own.
I can see why enterprise use cases report more failures than use cases like mine. My experience inside enterprises suggests they are cobbled‑together, complex webs of performative nonsense. I suspect those will have to be dismantled before they can be rebuilt.
The tools will keep changing. Treat constant experimentation as part of the work: try new models, new agent modes, and new testing helpers. Don’t wait for the perfect tool. We can do a lot with what exists today.
Closing: taste, attention, agency
Even with strong agents, the human jobs don’t go away: taste (what’s worth building and how it should feel), attention (edge cases, cleanup), and agency (sticking with it and shepherding humans and agents toward a clear goal). Better tools will keep reducing friction; those three still matter. I, for one, am interested in solutions that automate the validation layer.
I’m going to keep making things - Sagent now, BloomEDU, DreamFlyer, Frolic, BuddhaUR, Urban Beaver. Most experiments will stay small. That’s fine. The point is that it’s finally possible for one person to get from an idea to something people can actually use - and to learn from it quickly.
If any of this resonates, go build yours.