Original Research and Scientific Commentary

The papers presented below reflect the original thoughts, reasoning, and logical framework of David Allen LaPoint, President of the PrimerField Foundation. While these papers were prepared with the assistance of two independent AI systems that cross-check one another for errors in logic, mathematics, scope, and clarity, AI is never treated as an authority or source of conclusions. Each paper undergoes a final review by the author to ensure that the finished work accurately represents his own reasoning and conclusions, not the opinions or interpretations of AI systems. For a detailed explanation of how artificial intelligence is incorporated into the workflow used to produce these papers, please see the introductory papers, “A Structured Method for Using AI Correctly” and “Dual-Model Adversarial Methodology.”


A Structured Method for Using AI Correctly

This paper presents a rigorous, repeatable workflow for producing accurate technical writing with artificial intelligence. It explains how compartmentalizing work into four isolated ChatGPT projects prevents assumption bleed, confirmation bias, and conceptual drift, while an iterative adversarial QA loop between ChatGPT and Claude exposes errors in logic, mathematics, scope, and wording. The paper emphasizes that AI is never treated as an authority, but as a tool whose outputs must withstand repeated independent challenge. Final responsibility for premises, reasoning, and conclusions remains entirely with the author.

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Dual-Model Adversarial Methodology

This paper documents a deliberate, repeatable methodology for developing rigorous theoretical work using artificial intelligence without ceding epistemic authority. It describes a dual-model adversarial workflow in which two behaviorally distinct large language models—ChatGPT and Claude—are assigned non-overlapping roles: one focused on synthesis and aggressive expression, the other on hostile review, constraint enforcement, and logical audit. Disagreement between the models is treated as a diagnostic signal that exposes hidden assumptions, definitional ambiguity, scope creep, and unsupported inferences that routinely survive single-model workflows and conventional peer review. The method explicitly prohibits self-validation, requires full document re-audit after each revision, and maintains canon and final judgment entirely under human control. The result is a framework that produces unusually tight internal logical consistency while remaining explicit about what it does not replace, including empirical validation and experimental replication.
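For readers who think procedurally, the loop described above can be summarized in a short sketch. This is an illustrative outline only, not the paper's actual tooling: the function names (generate_draft, hostile_review, revise) and their stub bodies are hypothetical stand-ins for calls to the synthesis and review models.

# A minimal sketch, in Python, of the dual-model adversarial loop,
# assuming placeholder functions for the two model roles.

def generate_draft(topic: str) -> str:
    """Synthesis role (ChatGPT in the paper's workflow): produce a full draft."""
    return f"Draft on {topic}"

def hostile_review(document: str) -> list[str]:
    """Review role (Claude): return objections; an empty list means no disagreement."""
    return []  # placeholder; a real reviewer flags logic, math, scope, and wording issues

def revise(document: str, objections: list[str]) -> str:
    """Address each objection; self-validation by the drafting model is prohibited."""
    return document

def adversarial_loop(topic: str, max_rounds: int = 10) -> str:
    document = generate_draft(topic)
    for _ in range(max_rounds):
        # Key rule of the methodology: the FULL document is re-audited
        # after every revision, not just the passages that changed.
        objections = hostile_review(document)
        if not objections:
            break  # no disagreement left: hand off for final human review
        document = revise(document, objections)
    return document  # canon and final judgment remain with the human author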

Technical Paper (PDF)

Plain Language Version (Mobile-Friendly)

Truth and Authority

This paper examines a foundational but often overlooked distinction: the difference between what is true and who is speaking. It argues that truth is determined solely by whether a statement matches reality, not by credentials, status, or authority. Using clear examples from science and history, the paper explains why expertise is valuable without being infallible, why appeals to authority can fail, and why evidence—not reputation—is the final judge of what is real. It is written for general readers and is intended to clarify how truth should be evaluated in science, medicine, and public discourse.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Why “Trust the Experts” Isn’t Always Good Enough

This paper examines the distinction between informed trust in expertise and what the author terms authority-dependent judgment—the refusal to evaluate ideas unless they are endorsed by recognized authorities. Using historical examples from science and medicine, it argues that appeals to credentials, peer review, or consensus are not substitutes for evidence, logic, and direct engagement with arguments. The paper highlights limitations of traditional peer review, documents multiple cases where experts were wrong for decades, and proposes AI-assisted cross-checking as a more transparent way to audit logic, math, and claims without suppressing unconventional ideas. It concludes that critical thinking requires understanding reasons and evidence, not merely deferring judgment to authority.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Why Math Can’t Prove Reality

This paper explains a critical but frequently misunderstood limitation of mathematics: while math is an extraordinarily powerful tool for describing and predicting behavior, it cannot by itself establish what is physically real. Through clear examples from physics and history, the paper shows that mathematical consistency, elegance, and predictive success do not guarantee that a model corresponds to reality. The distinction between mathematical possibility and physical existence is emphasized, demonstrating that only observation and experiment can determine which mathematical descriptions, if any, apply to the real universe.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Is the Standard Model Really a Sub-Standard Model?

This paper presents a rigorous, evidence-based critique of the Standard Model of particle physics, arguing that its most serious problems are not unresolved details but foundational failures. It examines seven major anomalies—including the measurement problem, dark matter and dark energy, the hierarchy problem, matter–antimatter asymmetry, the cosmological constant catastrophe, incompatibility with general relativity, and neutrino masses—and shows that each represents a structural breakdown rather than a gap awaiting extension. The paper distinguishes predictive success from physical understanding and argues that reliance on adjustable placeholders and post hoc modifications signals a paradigm in crisis. It is written for readers seeking a clear, uncompromising evaluation of whether the Standard Model still deserves its foundational status.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Post-Higgs Silence

In 2012, the Higgs boson discovery was celebrated as confirmation of the Standard Model. Many expected it would open doors to new physics—new particles, new forces. More than a decade later, that hasn’t happened. This paper examines what the Higgs discovery actually confirmed, introduces the concept of “framework lock” in scientific interpretation, and argues that precision is not the same as understanding. The absence of new discoveries is itself meaningful information.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

Transverse Sensitivity Scale of Photons

When light passes near the edge of an object, the edge affects where photons are detected—even at surprisingly large distances. This paper answers a simple question: how far to the side of an edge can passing light still be affected? The answer is about 2 millimeters for visible light, which corresponds to thousands of wavelengths. This work establishes precise, quantitative constraints that any theory of light must be able to reproduce.
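As a quick plausibility check of the "thousands of wavelengths" figure, using an assumed representative visible wavelength of about 500 nm (the technical paper gives the exact values used):

\[
\frac{2\,\mathrm{mm}}{500\,\mathrm{nm}} \;=\; \frac{2\times 10^{-3}\,\mathrm{m}}{5\times 10^{-7}\,\mathrm{m}} \;=\; 4000 \ \text{wavelengths}
\]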

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)

When Scientists Get It Wrong About Who’s Right

This paper examines how scientifically correct ideas can be ignored or dismissed for decades—not because they are mathematically wrong, but because they conflict with prevailing assumptions, disciplinary boundaries, or expectations about who is “authoritative.” Using the case of Hannes Alfvén and the delayed acceptance of Alfvén waves, the paper shows that evidence and correctness alone do not guarantee acceptance. It explores how authority, consensus inertia, and psychological resistance shape scientific judgment, and compares Alfvén’s experience with other historical cases such as continental drift, meteorites, and early quantum theory. The paper concludes that skepticism is necessary but can fail when correct ideas are ignored rather than critically tested, demonstrating that scientific consensus is a human process rather than a flawless truth filter.

Technical Paper (PDF)

Plain Language Version (PDF)

Plain Language Version (Mobile-Friendly)