🎯 Why This Exists
Friends and colleagues kept asking me the same thing: “Why isn’t AI doing what I asked?” After years working across data analytics, automotive software, computer vision, and medical AI, I realized the gap was never about intelligence. It was about intuition. So I built this.
I’ve worn a lot of hats. Project management in data analytics. Automotive software engineering — the compliance side, control theory, sensor fusion. Computer vision and deep learning research. Even medical AI, specializing in uncertainty quantification and vision processing.
Through all of it, the people around me kept hitting the same wall with AI. Not because they weren’t smart — because nobody had explained how these things actually think. Or more accurately, how they don’t think.
💡 The Core Philosophy
The most important thing about understanding LLMs is not the algebra. It’s not the attention equations or the loss functions. It’s building the right mental model — the kind of intuition that lets you predict when AI will nail it and when it’ll confidently make something up.
That’s what this course is. Every analogy, every interactive, every chapter — designed to help you conceptualize how these systems work so you can actually use them well.
🛠️ How It Actually Got Built
Here’s the honest process, step by step:
Typed the curriculum from scratch
Key points, analogies, structure — all from my head first. Things like "context poisoning should go... somewhere. Which chapter?"
Used an LLM to refine and organize
Back-and-forth iterations to check placement, fill gaps, and stress-test the flow. Multiple rounds of "does this make sense here?"
LLM session for the technical plan
I wanted this on GitHub Pages — no server, no complexity, just static files anyone can fork. We mapped out the architecture together.
Generated the first chapter + template
I originally used Gemini's Stitch UI/UX tool to scaffold the visual design, then iterated from there.
Read chapter 1, fixed what wasn't right
Some content from a lesser model had subtle issues — oversimplifications, wrong emphasis. The kind of thing only a human who knows the material would catch.
Repeated for all chapters
Each one followed the same loop: generate a draft, read carefully, fix content, refine visuals. Easier one by one than batching.
Fixed content chapter by chapter
A dedicated pass through every page, checking accuracy, tone, and flow.
Used a smarter model to review everything
Final quality pass across the whole site — catching inconsistencies, checking analogies held up, flagging anything off.
🤔 What I Learned (Honestly)
I still overestimate what AI can do sometimes. It got some definitions subtly wrong — maybe from an earlier knowledge cutoff, maybe from training data quirks. And I’ll be honest: I was sometimes lazy. I didn’t always type out every detail I wanted, hoping the model would fill in the gaps. Sometimes it did. Sometimes it filled them with confident nonsense.
The biggest lesson: AI-assisted doesn’t mean AI-authored. Every page needed human judgment — the kind that comes from actually understanding the subject matter, not just being able to generate text about it.
I’ll also admit that switching to the Astro framework, adding the Chinese/English toggle, and supporting dark/light mode should have been planned from the beginning. My initial goal was just to get something out ASAP, which inevitably meant a late refactor. But honestly, taking this path proved exactly how powerful LLMs can be: migrating and refactoring the entire codebase took surprisingly little effort.
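For anyone curious what the Astro side of that involves: the GitHub Pages URL and the language toggle mostly come down to a few lines of config. Here's a minimal sketch — the site URL and locale codes below are placeholder assumptions, not this project's actual values, and dark/light mode is handled separately (typically a CSS class toggle, not config):

```typescript
// astro.config.mjs — minimal sketch for a GitHub Pages deployment
// with English/Chinese routing. Values here are illustrative only.
import { defineConfig } from 'astro/config';

export default defineConfig({
  // Full URL of the deployed site (hypothetical username).
  site: 'https://your-username.github.io',
  // Built-in i18n routing: English pages at /, Chinese under /zh/.
  i18n: {
    defaultLocale: 'en',
    locales: ['en', 'zh'],
  },
});
```

With a setup like this, Astro builds plain static files that GitHub Pages can serve directly — no server required, which matches the "just static files anyone can fork" goal above.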
Looking back, the core site took about 7 days in total (5 days for the initial build, plus another 2 days for the Astro migration and extra features). Since I’ll be adding more content over time, this foundation was definitely worth the refactor. And if I’d followed my own advice initially — sat down, wrote more of the course content myself, had the LLM help me with the gaps, and spent more time on the overall flow — it probably would’ve been done faster. Part of the process was me wanting to see how autonomous and capable Gemini 3 and the Gemini CLI had become. Curiosity is a valid reason to build things, even if it’s not the most efficient path.
Tools I used: Claude Code and Gemini CLI for development. Claude Opus and Claude Sonnet for content generation and review. A few GLM-family models for light work and minor edits. I’m just sharing what I used — sticking with one tool is perfectly fine, and probably preferable. Use whatever works for you.
If you want to know more about what I do and how I can help, check out the About & Contact page.