How the Expert Factory Works
The Expert Factory transforms a real human expert into an AI coaching clone. It does this through an 18-block pipeline organized into 8 phases. Each block is a self-contained workspace with instructions, inputs, a chat assistant, and outputs.
You don't need to understand the technology. The workbench guides you through every step.
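For readers who do want a mental model: the block structure described above can be sketched as a small data model. Everything here (class name, fields, example blocks) is illustrative, not the factory's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One self-contained workspace in the pipeline (illustrative model)."""
    name: str
    phase: int                                        # which of the 8 phases it belongs to
    instructions: str = ""
    inputs: list[str] = field(default_factory=list)   # files, notes, URLs you provide
    outputs: list[str] = field(default_factory=list)  # structured artifacts it emits

# Two example blocks; the real pipeline has 18 across 8 phases.
research = Block("research-discovery", phase=1, inputs=["expert-urls.txt"])
framework_design = Block("framework-design", phase=2, inputs=["transcripts/"])
```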
01 The Big Picture
An expert has years of knowledge, methodology, personality, and taste trapped in their head, their content, and their conversations. The factory extracts that into structured data, validates it against the expert's own quality standards, and compiles it into an AI that thinks, speaks, and coaches like them.
The factory doesn't just build the clone: it also creates the lead magnet, onboarding flow, design system, and deployment config. One pipeline run = complete expert operating system.
02 The 8 Phases
Research & Discovery
Gathers everything publicly known about the expert: websites, social profiles, podcasts, frameworks, market position. Runs 5-8 parallel research threads and creates a team-accessible knowledge notebook.
You enter the expert's name, paste their URLs, and upload any materials you have. The system does the rest.
Framework Design
The most critical step. Analyzes the expert's raw content to find their unconscious quality gates: what makes their advice THEIRS and not generic coaching. Produces a structured framework (like FORGE+SHIFT for Samuel) that becomes the anti-slop filter for the entire clone.
Upload transcripts and methodology docs. Add notes about patterns you've observed. The system runs a 3-pass analysis. You MUST review and approve before anything downstream runs.
A weak framework means every extraction downstream defaults to generic LLM output. This is where you invest time.
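For concreteness, a framework of this kind might be stored as structured data. The shape below is a hypothetical sketch: the field names and the example section are assumptions, with the 3+-citation rule taken from the Quality Gates section of this document.

```python
# Hypothetical shape for an expert framework; names and values are illustrative.
framework = {
    "name": "FORGE+SHIFT",
    "sections": [
        {
            "title": "Diagnose before prescribing",
            "input_criteria": ["client context gathered", "goal stated in client's words"],
            "output_criteria": ["advice tied to a named framework step"],
            "anti_patterns": ["generic motivational filler", "advice with no next action"],
            "sources": ["podcast-ep-12.txt", "workshop-2023.txt", "blog/diagnosis.md"],
        },
    ],
}

def passes_citation_gate(fw: dict) -> bool:
    """Gate check from Phase 2: every section needs 3+ source citations."""
    return all(len(s["sources"]) >= 3 for s in fw["sections"])
```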
Demo & Calibration
Creates a lightweight sales demo from research (so the expert sees their clone potential immediately) and generates scoring rubrics from the framework (so the tester knows what "good" means).
Runs automatically once Phase 2 is approved. Demo can be used in sales calls. Rubrics feed the tester in Phase 7.
Deep Extraction
The heavy lifting. Five parallel extractors pull apart the expert's IP into structured data: how they think (soul), how they speak (voice), what they teach (frameworks), what they recommend (resources), and what they sell (offers).
Upload transcripts and content into each extractor. They run the expert's framework as a quality filter on every extraction. Results are structured JSON files.
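The five-extractor fan-out can be sketched with a thread pool. The extractor names come from the text above; `run_extractor` is a stand-in for the real LLM-backed extraction, which this sketch does not implement.

```python
import json
from concurrent.futures import ThreadPoolExecutor

EXTRACTORS = ["soul", "voice", "frameworks", "resources", "offers"]

def run_extractor(name: str, transcripts: list[str]) -> dict:
    """Placeholder: a real extractor would call an LLM with the expert's
    framework as a quality filter and return structured data."""
    return {"extractor": name, "items": [], "source_count": len(transcripts)}

def extract_all(transcripts: list[str]) -> dict[str, str]:
    """Run all five extractors in parallel; return name -> JSON string."""
    with ThreadPoolExecutor(max_workers=len(EXTRACTORS)) as pool:
        results = list(pool.map(lambda n: run_extractor(n, transcripts), EXTRACTORS))
    return {r["extractor"]: json.dumps(r) for r in results}
```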
Quality Audit
Cross-references all extraction outputs against the expert's framework. Identifies gaps: things missing from the extractions that the clone would need to know. Generates follow-up questions.
Review the gap report. Zero critical gaps = proceed. Important gaps = decide if you can live with them for alpha. Nice-to-haves = ignore.
Launching with critical gaps means the clone has blind spots that users will discover. Better to catch them here.
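The triage rule above reduces to a small decision function. The severity labels come from the text; the 80% coverage threshold comes from the Quality Gates section of this document.

```python
def audit_verdict(gaps: list[dict], coverage: float) -> str:
    """Apply the Phase 5 gate: zero critical gaps and coverage >= 80% to proceed."""
    critical = [g for g in gaps if g["severity"] == "critical"]
    important = [g for g in gaps if g["severity"] == "important"]
    if critical or coverage < 0.80:
        return "blocked"
    if important:
        return "proceed-with-judgment"  # decide if you can live with them for alpha
    return "proceed"

# Nice-to-haves never block the pipeline.
assert audit_verdict([{"severity": "nice-to-have"}], coverage=0.9) == "proceed"
```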
Build & Compile
Weaves all extractions into the final clone: 12-section system prompt, knowledge files, tool configurations, interactive lead magnet, gamified onboarding flow, and brand-matched design system.
Runs automatically once Phase 5 is approved. The system prompt is the clone's brain. Knowledge files are its memory. Tools are its hands.
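Compilation is essentially weaving sections into one artifact. A minimal sketch, assuming a fixed section order; the section names here are invented for illustration (the real system prompt has 12 sections, which aren't listed in this document).

```python
# Illustrative only: three invented section names stand in for the real 12.
SECTION_ORDER = ["identity", "voice", "methodology"]

def compile_system_prompt(sections: dict[str, str]) -> str:
    """Join extraction-derived sections into one system prompt, in a fixed order."""
    missing = [name for name in SECTION_ORDER if name not in sections]
    if missing:
        raise ValueError(f"cannot compile, missing sections: {missing}")
    return "\n\n".join(f"## {name}\n{sections[name]}" for name in SECTION_ORDER)
```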
Test & Validate
Runs 25-50 simulated conversations against the clone across 8 categories (basic coaching, edge cases, adversarial, failure recovery). Scores each conversation on 10 dimensions. 85%+ overall to proceed.
Review the test results. 10 metric bars show scores per dimension. Zero governance violations required. Focus on dimensions below 7/10.
This is go/no-go. Without testing, you're shipping a clone you hope works. The tester proves (or disproves) it.
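The pass criteria reduce to one predicate, using the thresholds from the text: overall score at least 85%, every dimension at least 7/10, and zero governance violations.

```python
def clone_passes(dimension_scores: dict[str, float], governance_violations: int) -> bool:
    """Go/no-go check over the 10 dimension scores (each out of 10)."""
    overall = sum(dimension_scores.values()) / (10 * len(dimension_scores))  # as a fraction
    return (
        overall >= 0.85
        and all(score >= 7 for score in dimension_scores.values())
        and governance_violations == 0
    )
```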
Launch
Packages everything into a boarding pack (the expert's delivery artifact), publishes all pages, and generates deployment configs. This is when the clone goes live.
The system generates the boarding pack, publishes each artifact to its domain, and produces deployment configs for the chat platform.
03 Autonomy Levels
Every block has an autonomy level that tells you how much human involvement is needed. For your first expert, most things run as Human Gate so you can verify every step. As you prove the system, you dial blocks toward Full Auto.
Runs without human input. You can review the output, but the system doesn't wait for you.
Runs automatically but benefits from human input. Upload more files, add notes; the more you give it, the better the output.
The pipeline STOPS here until a human reviews and approves. Nothing downstream runs until you say go.
Requires you to do something: upload files, fill in a form, paste a URL. The block can't run without your action.
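The four levels map onto an enum plus one scheduling question: does the pipeline wait for a human? A sketch; only Full Auto and Human Gate are named in the text above, so the other two labels here are assumptions.

```python
from enum import Enum

class Autonomy(Enum):
    FULL_AUTO = "runs without human input"
    AUTO_WITH_INPUT = "runs automatically; human input improves output"  # label assumed
    HUMAN_GATE = "pipeline stops until a human approves"
    HUMAN_ACTION = "cannot run until a human acts"                       # label assumed

def pipeline_waits(level: Autonomy) -> bool:
    """Only gated or action-required blocks halt the pipeline."""
    return level in (Autonomy.HUMAN_GATE, Autonomy.HUMAN_ACTION)
```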
04 Quality Gates
Three explicit checkpoints where the pipeline stops and waits for human approval.
Gate 1 (after Framework Design): Framework has input criteria, output criteria, anti-patterns, and 3+ source citations per section.
Why: Everything downstream uses this framework as its quality filter.
Gate 2 (after Quality Audit): Zero critical gaps. Coverage ≥ 80%. Follow-up questions generated for important gaps.
Why: Compilation with critical gaps produces a clone with blind spots.
Gate 3 (after Test & Validate): Overall score ≥ 85%. All dimensions ≥ 7/10. Zero governance violations. Zero anti-pattern matches.
Why: This is the go/no-go for production deployment.
05 Where Outputs Go
The factory doesn't just produce files. Its outputs flow into a network of services that together create the expert's complete operating system.