Kaizen R/W

AI surpasses humans in many ways. Writing isn't one of them.

Below is the v2.4.3 specification of a manuscript-enhancement pipeline I built, ran end-to-end on my own novel-in-progress, and walked away from. Twenty-one named agents, nine phases, schema-validated artifacts, hash-pinned plan invalidation, cycle-tiered thresholds, escalation packets, override format. Implementation lives at github.com/RobThePCGuy/KaizenRW (private). A patent application covering this approach was filed and is not being pursued.
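The implementation is private, so every name below is an assumption; but "hash-pinned plan invalidation" suggests a familiar shape, and a minimal sketch of that shape might look like this: downstream plans carry a hash of the foundation artifacts they were built against, and any change to the foundation makes them stale.

```python
import hashlib
import json

# Hypothetical sketch of hash-pinned plan invalidation. The spec names the
# pattern; the implementation is private, so these names and fields are invented.

def foundation_hash(artifacts: dict) -> str:
    """Hash the foundation artifacts deterministically (sorted keys)."""
    canonical = json.dumps(artifacts, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def plan_is_valid(plan: dict, artifacts: dict) -> bool:
    """A plan pinned to an older foundation hash is stale and must be rebuilt."""
    return plan.get("foundation_hash") == foundation_hash(artifacts)

foundation = {"premise": "a reader, not a rewriter", "pov": "first person"}
plan = {"steps": ["outline", "draft"], "foundation_hash": foundation_hash(foundation)}
assert plan_is_valid(plan, foundation)

foundation["pov"] = "third person"          # foundation changed
assert not plan_is_valid(plan, foundation)  # plan invalidated
```

The point of the pin is mechanical honesty: no agent gets to keep working from a plan whose premises have moved underneath it.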

The pipeline works. End-to-end runs produced clean, polished prose. Too polished. The output is good in the way an article is good, not in the way a novel is good, and it no longer felt like mine. That is the actual problem: AI succeeded at generation, well enough that the result stopped being recognizable as the author's. A pipeline that rewrites prose normalizes prose, and normalized prose is no longer the writer's.

I built a suite of tools instead. Three editorial pains, one non-generative wedge. Weaver organizes the drafts you've already written. Rewriter does a careful chapter-by-chapter pass with the writer at every accept step, never running end-to-end. R/W reads with you and doesn't rewrite your book. The reader is at krw.kaizenrw.com; the suite is at kaizenrw.com.

I'm leaving the spec, the implementation reference, and the IP claim together. Take what is useful. The patterns inside (schema versioning, foundation-lock hashes, cycle-tiered thresholds, constraint registry plus mechanical-sign verification, escalation packets) hold up in domains where end-to-end AI generation is appropriate. The suite I built instead keeps the writer at every accept step, because end-to-end was the part that didn't work for fiction.

Raw text: praxis.txt.

What I built instead is a reader.

The author writes. The AI notices. The author keeps the broken pieces.