What I Learned Designing an AI Assistant I Actually Trust
Most AI assistant setups fail for the same reason most productivity systems fail.
They’re built for a fantasy version of you — someone with infinite patience to prompt, review, approve, and babysit. The real you has 47 things competing for attention and zero tolerance for friction.
I spent the last week building a personal AI assistant. Not a chatbot. Not a “co-pilot” that waits for instructions. An actual assistant — one that shows up proactively, handles tasks autonomously, and knows when to ask versus when to just do.
Here’s what I learned about designing AI systems you can actually trust.
The Problem With Most AI Setups
There are two failure modes I see constantly:
Too dumb: The AI can only respond when you ask. You’re still the one remembering to check things, triggering workflows, and doing the coordination work. You’ve added a tool, not removed load.
Too dangerous: The AI has access to everything and can do anything. One hallucination, one misunderstood context, and it’s sending emails to clients or deleting files you needed.
Most people oscillate between these extremes — either micromanaging their AI or giving it access they later regret.
The missing piece isn’t better prompts or fancier models. It’s system design — specifically, designing the boundaries of autonomy.
The Three Questions That Shaped My System
Before I touched any tools, I forced myself to answer three questions:
1. What decisions am I comfortable delegating completely?
Not “what tasks” — decisions. Tasks are mechanical. Decisions require judgment.
For me: prioritising what research to surface, deciding when a task is urgent enough to interrupt me, choosing what context to include in a morning briefing. These are judgment calls, but low-stakes ones. If the AI gets them slightly wrong, I lose a few minutes, not a relationship or a deal.
2. What actions should never happen without my explicit approval?
Anything that leaves my “system” and touches the outside world with my identity attached. Sending emails. Posting publicly. Scheduling meetings with other people.
The heuristic: if a mistake here would require me to apologise to another human, it needs approval.
3. Where do I actually lose time to friction — not to the work itself?
This one surprised me. I thought I’d automate the “big” tasks. Instead, the biggest wins came from eliminating transitions — the mental overhead of context-switching between tools, remembering what I was supposed to check, and pulling together information scattered across five different places.
What Autonomy Actually Looks Like
My assistant — I call her Mira — now handles three categories of work without asking:
Daily orientation: Every morning at 8am, she sends me a briefing. Weather, calendar overview, priority tasks, and one piece of research relevant to what I’m working on. I didn’t ask for this. She just shows up.
Research accumulation: Every afternoon, she appends fresh research to a running document. AI developments, fintech news, workflow ideas. I never have to “remember to research.” It just accumulates.
Task capture: When I voice-note a task into Telegram, it appears on my task board. Correctly categorised. No friction. I don’t even open the task app most days.
None of this is magic. It’s plumbing — connecting systems that already existed. But the design is what makes it useful: she operates within clear boundaries, on a predictable rhythm, with outputs I can trust without reviewing.
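That rhythm is really just a fixed daily schedule. Here's a minimal sketch of the idea in Python; the job names, times, and the `jobs_due` helper are illustrative (my actual setup lives in automation platforms, not code I maintain), but the shape is the same: a short list of routines, each with a time it fires.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ScheduledJob:
    name: str
    run_at: time   # local time the job fires each day
    action: str    # what the assistant does, delivery channel left abstract

# The autonomous routines, expressed as a fixed daily rhythm.
# Times here are illustrative.
DAILY_RHYTHM = [
    ScheduledJob("morning_briefing", time(8, 0),
                 "weather + calendar + priority tasks + one research item"),
    ScheduledJob("research_append", time(15, 0),
                 "append fresh research to the running document"),
]

def jobs_due(now: time, last_run: dict[str, time]) -> list[ScheduledJob]:
    """Return jobs whose scheduled time has passed and that haven't run yet today."""
    return [j for j in DAILY_RHYTHM
            if now >= j.run_at and last_run.get(j.name) != j.run_at]
```

The point of writing it this way: the schedule is data, not logic. Adding a touchpoint means adding a row, not rewiring a workflow.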
The Approval Matrix
Here’s the framework I use to decide what gets automated vs. what requires my sign-off:
| | Low stakes if wrong | High stakes if wrong |
|---|---|---|
| Reversible | Full autonomy | Autonomy with logging |
| Irreversible | Approval with suggestion | Hard block (I do it myself) |
Full autonomy: Mira can read my email, research topics, organise files, and update my task board. If she gets something wrong, I can fix it in seconds.
Autonomy with logging: She can modify documents and add calendar events, but I can see exactly what changed. Mistakes are visible and reversible.
Approval with suggestion: She can draft an email or propose a meeting time, but I press send. She does the work; I own the decision.
Hard block: Anything public-facing, anything that commits me to other people, anything that deletes without backup. She doesn’t touch these.
This matrix isn’t about the AI’s capability. It’s about my risk tolerance and the cost of mistakes.
What I Got Wrong Initially
Two mistakes worth sharing:
Mistake 1: Automating tasks instead of decisions.
My first instinct was to automate outputs — generate this report, send this summary. But the real leverage came from automating inputs — what information reaches me, when, and in what form.
When you automate outputs, you still have to review everything. When you automate inputs, you change what you’re even thinking about.
Mistake 2: Designing for capability instead of rhythm.
I initially built Mira to respond to requests. But the most valuable thing she does is show up unprompted at specific times — morning, mid-afternoon, evening. Not because I asked, but because that’s when I need orientation.
The best AI systems aren’t reactive. They’re rhythmic.
The Design Principle That Changed Everything
Here’s the line I keep coming back to:
Presence matters more than tasks.
I don’t need an AI that can do everything. I need one that shows up at the right moments with the right context, and otherwise stays out of the way.
That meant designing for fewer interactions, not more. Consolidating information instead of scattering it. Batching instead of interrupting.
Mira sends me three messages a day, max. That constraint forced better design than any capability expansion would have.
What This Means for You
You don’t need to build what I built. The tools don’t matter — I used a mix of automation platforms, messaging apps, and cloud storage that fit my existing workflow.
What matters is the thinking:
Start with decisions, not tasks. What judgment calls are you comfortable delegating?
Design for rhythm, not reaction. When do you need information to show up, unprompted?
Use the approval matrix. Map every potential AI action to a quadrant. Be honest about what’s actually reversible.
Constrain on purpose. Fewer touchpoints, tighter boundaries, more trust.
The goal isn’t an AI that can do everything. It’s an AI you don’t have to think about — because you’ve already thought through what it should do.
I’m working on a detailed implementation guide for those who want to build something similar — the actual tools, the specific workflows, and the configuration that makes it work. If that’s interesting, reply and let me know what you’d want it to cover.


