MoFlo: Persona System

Designing a Persona System for Consistent AI-Generated Content
Despite strong AI generation quality, content tone varied from session to session. SMBs wanted consistency across platforms, but manually retyping brand instructions into every prompt created friction. I designed MoFlo's persona system to encode brand identity directly into the generation flow.
The Problem: AI Was Flexible. Brands Are Not.
Brand voice wasn't sticking. AI could generate captions and visuals, but identity drifted. Users were re-typing tone instructions every session, getting different emotional tone across platforms, making manual edits to "fix voice" after generation, and experiencing confusion about who the content was actually for.
Signal: What Triggered the Investigation
As adoption grew, we began monitoring qualitative feedback alongside usage data. We saw high draft generation rates but low direct publish rates, along with frequent manual edits before scheduling.
The AI was producing content. Users were still doing the thinking. If every draft required rewriting, the system wasn't reducing cognitive effort — it was shifting it.
User Research
Most users didn't think in adjectives. They thought in: who they were speaking to, what they wanted to be known for, how they wanted to be perceived, and what they avoided saying.
"I want it to sound like us, not like AI."
"We're premium but approachable. I don't know how to explain that to a model."
"Sometimes it's too salesy. That's not our brand."
Initial Solution: Manual Persona Builder
The first iteration involved a manual flow: create a persona with tone/audience/writing rules, select it before generation, and AI applied persona rules behind the scenes.
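The flow above implies a persona record carrying tone, audience, and writing rules that gets merged into each generation request behind the scenes. A minimal sketch of that idea, with all field names and sample values invented for illustration (MoFlo's actual schema isn't shown here):

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    # Hypothetical schema: the case study names tone, audience, and writing rules
    name: str
    tone: str
    audience: str
    writing_rules: list[str] = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render persona rules as instructions prepended to every generation."""
        rules = "\n".join(f"- {r}" for r in self.writing_rules)
        return (
            f"Write in a {self.tone} tone for {self.audience}.\n"
            f"Follow these rules:\n{rules}"
        )

def build_prompt(persona: Persona, user_prompt: str) -> str:
    # Persona rules are applied "behind the scenes": the user only types the task
    return f"{persona.to_system_prompt()}\n\nTask: {user_prompt}"

cafe = Persona(
    name="Neighborhood Cafe",
    tone="warm, premium but approachable",
    audience="local regulars and first-time visitors",
    writing_rules=["Avoid salesy language", "No emojis"],
)
print(build_prompt(cafe, "Write an Instagram caption for our new autumn menu"))
```

The point of the structure is that identity lives in one reusable object rather than being retyped into every prompt.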
What Didn't Work
Despite shipping the persona builder: only 38% of users created at least one persona, only 17% reused a persona, 60% still included manual tone instructions, 30% regenerated content due to tone mismatch, and 50% of the time persona selection was skipped entirely.
"I just type what I want." — "I don't remember what this persona does." — "It's faster to just tell it again."
Rethinking the Model
If structured personas weren't naturally adopted, what would make identity feel intuitive? The key behavioral shift: from rewriting identity to selecting it. The goal wasn't to eliminate prompting — it was to eliminate repetition.
Final System: Making Identity Prompt-Native
The solution preserved user control while quietly reinforcing consistency. Instead of treating personas as a separate setup feature, I moved identity directly into the generation layer.
Lightweight Persona Creation: Redesigned around base archetypes instead of asking users to construct identity from scratch.
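Base archetypes turn setup into selecting a template and tweaking only what differs. One way to sketch that, with archetype names and fields invented for illustration (the real MoFlo set isn't specified here):

```python
# Hypothetical base archetypes; users start from one instead of a blank form
ARCHETYPES = {
    "bold_challenger": {"tone": "direct, confident", "rules": ["Short sentences", "No hedging"]},
    "premium_boutique": {"tone": "polished, understated", "rules": ["Avoid exclamation marks"]},
    "friendly_local": {"tone": "warm, conversational", "rules": ["First-person plural voice"]},
}

def create_persona(archetype: str, **overrides) -> dict:
    """Start from an archetype and override only the fields the user changes."""
    base = dict(ARCHETYPES[archetype])
    base.update(overrides)
    return base

persona = create_persona("premium_boutique", tone="polished but approachable")
```

Overriding a single field keeps creation lightweight while still producing a complete persona.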
Active Persona Visibility: During generation, the selected persona was always visible. Users could see what rules were being applied, edit them inline, and understand why output looked a certain way.
Prompt → Persona Bridge: If a user typed additional tone rules in the prompt ("Make it less salesy", "More direct", "Target investors"), the system surfaced a subtle suggestion: "Add this to your persona?" Manual behavior became structured data. Instead of fighting user habit, the system absorbed it.
Impact
Within one month: 2.3× increase in persona reuse across sessions. 34% reduction in manual tone instructions inside prompts. 21% decrease in regeneration due to tone mismatch. 18% increase in direct publish rate after first draft.
But the biggest change wasn't in metrics — it was in behavior. Users stopped rewriting their identity every session. They started selecting and refining it.
Learnings
AI features don't fail because they lack capability. They fail when they misalign with behavior. Three key takeaways: Structure must absorb habit. Reduce repetition, not control. Visibility builds trust.