A Structured Method for Using AI Correctly
How I Combine Four Compartmentalized ChatGPT Projects with an Iterative ChatGPT–Claude QA Loop to Produce Accurate Technical Papers
David LaPoint
PrimerField Foundation
Introduction
Artificial intelligence is often criticized as unreliable, prone to hallucinations, or incapable of producing trustworthy technical writing. In my experience, these outcomes are rarely failures of the AI systems themselves. They are failures of workflow design.
Through extensive daily use, I have developed a structured system that does two things simultaneously:
1. Compartmentalizes work inside ChatGPT into four distinct projects, each with a narrowly defined role and explicit constraints, preventing assumptions from bleeding across tasks.
2. Applies an iterative, adversarial quality-assurance loop between ChatGPT and Claude at the publication stage, forcing logic, mathematics, conclusions, and references to withstand repeated independent critique.
This method does not depend on trusting any single AI output. It is intentionally designed so that trust is unnecessary.
The First Principle: AI Is a Tool, Not an Authority
At no point in this workflow is AI treated as a source of truth, a final authority, or a substitute for human judgment.
Instead, AI is treated as:
- A fast research assistant
- A technical analyst
- A drafting tool
- A critical reviewer
Every stage of the workflow assumes errors are possible and structures the process so those errors are exposed rather than concealed.
Everything that follows is built on this principle.
Part I: The Four-Project Compartmentalized Research Method
For theoretical research and long-running investigations, I do not work inside a single continuous conversation. Instead, I separate my work into four distinct ChatGPT projects. Each project has a specific purpose and an explicit set of allowed behaviors.
The goal of this compartmentalization is to prevent:
- Conceptual drift over time
- Confirmation bias
- Assumption bleed-through
- Invisible error propagation
Each project enforces a different mode of thinking.
Project 1: Neutral Technical Analysis
Purpose: Perform calculations, simulations, geometry checks, field analysis, or other technical work using only explicit inputs.
Rules:
- No theoretical interpretation
- No expected outcomes
- No narrative fitting
- All inputs are treated as test objects, not evidence
Benefit: This enforces analytical neutrality. The AI is prevented from implicitly shaping results toward what it may infer the user expects to see.
Project 2: Canonical Theory Repository
Purpose: Store the theory in a stable, controlled form that serves as a reference baseline.
Rules:
- The content is treated as canonical
- It is not rewritten, refined, or interpreted unless I explicitly direct changes
- The project is not an evolving discussion
Benefit: This prevents theory drift. Evaluations are always performed against a consistent and unchanging statement of what the theory actually claims.
Project 3: Evaluation and Testing
Purpose: Compare new information, observations, or analytical results against the canonical theory.
Rules:
- The project is explicitly allowed to critique the theory
- Mismatches, missing assumptions, and non sequiturs must be flagged
- Supporting evidence must be separated from interpretation
Benefit: This creates a dedicated environment for falsification-style thinking. Agreement and disagreement are treated as equally informative.
Project 4: Writing and Publication
Purpose: Convert research, analysis, and conclusions into structured papers suitable for publication.
Rules:
- Prioritize clarity and precise framing
- Claims must be scope-controlled
- Stylistic edits must not alter technical meaning
- When a paper reaches this stage, a formal verification process is triggered
Benefit: This isolates writing from analysis and testing. It provides a controlled pipeline for producing publishable artifacts without contaminating technical content.
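The four roles above can be captured as explicit per-project system prompts, one per ChatGPT project. The sketch below is illustrative: the dictionary name, function, and exact prompt wording are my own summary of the rules stated above, not a canonical specification.

```python
# Hypothetical per-project system prompts summarizing the four roles.
# The exact wording is illustrative; only the role boundaries matter.
PROJECTS = {
    "analysis": (
        "Perform calculations and simulations using only explicit inputs. "
        "Do not interpret results theoretically, anticipate expected outcomes, "
        "or fit findings to a narrative. Treat all inputs as test objects."
    ),
    "canon": (
        "Store the theory as a fixed reference baseline. Do not rewrite, "
        "refine, or interpret it unless explicitly directed."
    ),
    "evaluation": (
        "Compare new information against the canonical theory. Flag "
        "mismatches, missing assumptions, and non sequiturs. Keep supporting "
        "evidence separate from interpretation. Critique is allowed."
    ),
    "writing": (
        "Convert validated results into a structured paper. Keep claims "
        "scope-controlled; stylistic edits must not alter technical meaning."
    ),
}

def system_prompt(project: str) -> str:
    """Return the fixed system prompt for a named project."""
    return PROJECTS[project]
```

Every new conversation then begins from exactly one of these prompts, which is what keeps assumptions from bleeding across projects.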
Why the Four-Project System Works
When all tasks are performed in a single context:
- Assumptions influence calculations
- Theory shapes interpretation
- Errors propagate invisibly
- Context memory affects tasks it should not
By compartmentalizing work:
- Each project enforces a specific mode of reasoning
- Inputs and assumptions remain visible
- Cross-checking becomes meaningful
- Long-running research remains stable and organized
In practice, Projects 1–3 are where analysis, testing, and falsification occur. Project 4 is where validated results are converted into a paper.
That transition is where the second method becomes essential.
Part II: The Iterative ChatGPT–Claude QA Loop
When a subject becomes important enough to publish, I do not rely on a single AI system. Instead, I use a structured, iterative cross-review process between ChatGPT and Claude that functions as an adversarial audit.
Each system is required to challenge the other's work.
Step 1: Research and Initial Drafting in ChatGPT
The process begins in ChatGPT with:
- Exploratory discussion
- Background research
- Data gathering and organization
If the topic proves significant, I then provide:
- The premise to be proposed
- The reasoning framework to use
- The conclusions reached during discussion
- Any constraints on scope, tone, or certainty
ChatGPT produces a complete draft. This draft is not assumed to be correct. It is the first structured version.
Step 2: Independent Critical Review by Claude
The draft is given to Claude in a separate context. Claude is instructed to perform a cold review, including checks for:
- Logical consistency
- Mathematical correctness and unit integrity
- Unsupported or over-extended conclusions
- Ambiguous wording
- External reference accuracy where applicable
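The cold-review checklist can be packaged as a reusable prompt so that every review pass asks for the same checks in the same order. The function and wording below are a hypothetical sketch of how I frame that instruction, not a fixed template:

```python
# Illustrative cold-review prompt builder; the check list mirrors the
# criteria above, and the framing asks for weaknesses, not a defense.
REVIEW_CHECKS = [
    "logical consistency",
    "mathematical correctness and unit integrity",
    "unsupported or over-extended conclusions",
    "ambiguous wording",
    "external reference accuracy where applicable",
]

def cold_review_prompt(draft: str) -> str:
    """Build a review prompt that asks the reviewer to find weaknesses."""
    checks = "\n".join(f"- {c}" for c in REVIEW_CHECKS)
    return (
        "Review the following draft as a critic. Your task is to find "
        "weaknesses, not to defend the paper. Check for:\n"
        f"{checks}\n\nDRAFT:\n{draft}"
    )
```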
Claude's task is not to defend the paper, but to find weaknesses.
Step 3: Claude Revises the Paper
Claude rewrites the paper, incorporating:
- Corrections to logic and mathematics
- Scope tightening
- Improved clarity
- Removal or softening of unsupported claims
This produces a second-generation draft.
Step 4: Fresh Review by ChatGPT in a New Conversation
Claude's revised draft is taken back to ChatGPT in an entirely new chat. This deliberate context reset prevents hidden assumptions from influencing the review.
ChatGPT examines the paper and flags:
- Errors introduced during rewriting
- Logical regressions
- Mathematical or unit mistakes
- Meaning shifts or over-corrections
- Problems with claims or references
Step 5: Iterative Objection and Refinement
ChatGPT's objections are returned to Claude. Claude incorporates them where appropriate or, in some cases, objects to the objections and proposes alternatives.
The revised paper is then returned to ChatGPT for another review.
This loop repeats until substantive disagreements are exhausted.
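Steps 2 through 5 amount to a convergence loop between two reviewers. The sketch below models that loop generically: `revise` and `review` are hypothetical stand-ins for calls to Claude and to ChatGPT in a fresh conversation, and the stub implementations at the bottom exist only so the loop is runnable.

```python
from typing import Callable

def qa_loop(
    draft: str,
    revise: Callable[[str, list[str]], str],  # e.g. Claude: apply or rebut objections
    review: Callable[[str], list[str]],       # e.g. ChatGPT in a NEW chat: list objections
    max_rounds: int = 10,
) -> tuple[str, int]:
    """Iterate revise/review until no substantive objections remain.

    Returns the converged draft and the number of review rounds used.
    Raises RuntimeError if convergence is not reached within max_rounds,
    which in practice means the disagreement needs human adjudication.
    """
    for round_no in range(1, max_rounds + 1):
        objections = review(draft)         # fresh-context review (Step 4)
        if not objections:                 # convergence (Step 6)
            return draft, round_no
        draft = revise(draft, objections)  # incorporate or rebut (Step 5)
    raise RuntimeError("No convergence; escalate to the human author.")

# Stub reviewers for demonstration: three objections, resolved one per round.
pending = ["unit error in eq. 3", "overbroad claim", "ambiguous wording"]
def review(d): return [o for o in pending if o not in d]
def revise(d, obs): return d + " | fixed: " + obs[0]
final, rounds = qa_loop("v1", revise, review)
```

The important structural point is that `review` sees only the current draft, never the conversation that produced it, mirroring the deliberate context reset in Step 4.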
Step 6: Completion by Convergence
The paper is considered complete only when both AI systems independently agree that:
- The logic is coherent
- Mathematics and units are correct
- Conclusions are properly supported and scope-limited
- External references are accurate where used
Agreement is not assumed. It is reached through repeated challenge.
Step 7: Author Verification and Intent Alignment
After technical convergence is reached, I perform a careful final review of the paper.
This review is used to confirm that:
- The paper accurately reflects my original premise
- The reasoning aligns with my own thoughts and logic on the subject
- No reframing or wording shifts have altered the intended meaning
This step ensures that what the reader sees represents my conclusions and reasoning, not those of an AI system. The AI functions strictly as a research, drafting, and verification assistant. It does not originate the premise, develop the subject matter, or determine the conclusions of the paper. Those remain entirely my responsibility.
Why This Method Is Worth the Effort
This workflow takes more time than single-shot prompting. That is intentional.
Most complaints about AI unreliability stem from treating AI as a one-step answer machine. A single prompt produces a single response, and if that response contains errors, the user blames the tool.
But serious technical work has never been a one-step process. Drafts are reviewed. Calculations are checked. Conclusions are challenged. The same discipline applies here.
The additional time invested in compartmentalization and iterative review is not overhead. It is the mechanism that transforms AI from a fast but unreliable shortcut into a rigorous and trustworthy workflow component.
For casual questions, single-shot prompting is fine. For publishable work, the cost of errors far exceeds the cost of verification.
Conclusion
This workflow does not rely on trusting AI. It relies on structure, compartmentalization, and iterative adversarial review.
The four-project system prevents contamination and drift during research. The ChatGPT–Claude QA loop ensures that published papers can withstand repeated critical scrutiny.
Evaluating AI based on a single response misses the point. This method evaluates entire chains of reasoning until they converge.
That is the difference between using AI as a novelty and using it as a serious technical tool.