Dev Diary #3

April 9, 2026

Systems touched:

  • life.os, my personal dashboard

  • tech.interview.practice, an interview coaching tool

Principles explored:

  • Privacy-preserving logging and audit trails

  • Voice-first interaction design and silence detection

  • Graceful degradation in real-time speech systems

  • Prompt engineering for domain-specific AI coaching

  • Client-side state synchronization without hydration mismatches


Today was split between two very different kinds of work: infrastructure for introspection, and UX refinement for real-time interaction.

In life.os, I've been building out the developer diary system — the very thing I'm writing in now. The work involved rethinking how commits get collected, redacted, and shaped into reflective prose. The core tension here is transparency without exposure: I want genuine records of what I built, but certain details (API endpoints, database schemas, user names) have no business being permanently logged. I refactored the redaction pipeline to be content-aware rather than just path-based. Instead of surgically removing lines, I'm now filtering at a semantic level — checking whether a message references confidential patterns before it ever gets written. This feels more honest than pretending the work didn't happen; it just acknowledges that not every technical choice belongs in a public diary.
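The semantic filtering described above can be sketched roughly like this. Everything here is illustrative — the pattern list, function names, and the decision to drop a whole message rather than mask parts of it are assumptions, not the actual life.os pipeline:

```typescript
// Hypothetical content-aware redaction filter. Rather than stripping
// lines by file path, each commit message is checked against semantic
// patterns for confidential material before it is ever written out.
const CONFIDENTIAL_PATTERNS: RegExp[] = [
  /\/api\/[\w/-]+/, // API endpoint paths
  /\b(CREATE|ALTER|DROP)\s+TABLE\b/i, // database schema statements
  /\buser(name)?:\s*\S+/i, // user names in key:value form
];

// True when a message references something that shouldn't be logged.
function referencesConfidential(message: string): boolean {
  return CONFIDENTIAL_PATTERNS.some((pattern) => pattern.test(message));
}

// Keep only the messages that are safe to publish in the diary.
function filterForDiary(messages: string[]): string[] {
  return messages.filter((message) => !referencesConfidential(message));
}
```

Dropping the whole message (instead of blanking out the sensitive span) matches the "acknowledge the work happened, but omit the detail" stance: the diary can still note that a commit existed without reproducing its contents.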

In tech.interview.practice, the work was almost entirely about voice interaction. I replaced a text-based mentor with a voice coach ("Sam"), which fundamentally changed how the system needs to behave. Voice introduces latency, ambiguity, and the need for real-time feedback that text doesn't require. I spent time tuning the voice activity detection (VAD) threshold — the moment when silence becomes "you're done talking." Too eager, and it cuts users off mid-thought. Too conservative, and the system feels sluggish. I also discovered that when the code editor is open, VAD should pause entirely; otherwise, keyboard clicks and typing noises trigger false positives.
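The silence-detection logic can be sketched as a small state machine. The threshold values, class name, and RMS-based level metering are all illustrative assumptions, not the app's real implementation — but the shape of the trade-off (threshold plus dwell time, with a hard pause while the editor is open) is the same:

```typescript
// Minimal voice-activity / silence-detection sketch. Audio arrives as
// periodic RMS level samples; once the level stays below a threshold
// for long enough, the speaker is considered done talking.
interface VadConfig {
  silenceThreshold: number; // RMS below this counts as silence
  silenceMs: number; // how long silence must last before "done"
}

class SilenceDetector {
  private silentSince: number | null = null;
  private paused = false; // e.g. while the code editor has focus

  constructor(private config: VadConfig) {}

  // Pause VAD entirely — keyboard clicks would trigger false positives.
  setPaused(paused: boolean): void {
    this.paused = paused;
    if (paused) this.silentSince = null;
  }

  // Feed one level sample; returns true once the user is "done talking".
  update(rmsLevel: number, nowMs: number): boolean {
    if (this.paused) return false;
    if (rmsLevel >= this.config.silenceThreshold) {
      this.silentSince = null; // still speaking
      return false;
    }
    if (this.silentSince === null) this.silentSince = nowMs;
    return nowMs - this.silentSince >= this.config.silenceMs;
  }
}
```

Raising `silenceMs` is the "too conservative, feels sluggish" end of the dial; lowering it is the "cuts users off mid-thought" end.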

The other layer was prompt engineering. I realized Sam was reading code aloud unnecessarily, wasting tokens and confusing users. Now I explicitly tell the coach to reference lines by number instead. This meant prepending line numbers to the code sent to the API — a small change with outsized impact on the interaction model.
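The line-numbering step is tiny; a sketch of the idea (the helper name is mine, not the app's):

```typescript
// Prefix each line of the editor's code with its 1-based line number
// before sending it to the API, so the coach can say "look at line 12"
// instead of reading the code aloud.
function withLineNumbers(code: string): string {
  return code
    .split("\n")
    .map((line, index) => `${index + 1}: ${line}`)
    .join("\n");
}
```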

I also fixed 25 audit issues across security, accessibility, and reliability. Most were straightforward (updating dependencies, adding ARIA labels), but a few revealed deeper problems: hydration mismatches from client-only state set in useEffect hooks, and the Monaco editor's cursor jumping on external updates. These aren't glamorous bugs, but they're the friction points that erode trust in interactive tools.
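The hydration-mismatch class of bug comes down to one rule: the first client render must emit exactly what the server emitted, and browser-only state can appear only after mount. Stripped of React, the pattern reduces to something like this (the function and values are hypothetical, used only to show the invariant):

```typescript
// Anything derived from browser-only state (localStorage, window size,
// Date.now()) renders a stable placeholder first, and only switches to
// the real value once the component has mounted on the client.
function renderThemeLabel(mounted: boolean, clientValue: string | null): string {
  // Before mount, emit exactly what the server emitted.
  if (!mounted || clientValue === null) return "default-theme";
  return clientValue;
}

const serverHtml = renderThemeLabel(false, null);
const firstClientRender = renderThemeLabel(false, "dark-theme");
const afterMount = renderThemeLabel(true, "dark-theme");
```

Because `serverHtml` and `firstClientRender` agree, hydration succeeds; the client-only value shows up one render later, after mount, where no mismatch is possible.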
