Why We Let Others Think for Us

How Expert Opinion Can Replace Real Understanding, and What We Might Do About It

David Allen LaPoint

PrimerField Foundation

January 1, 2026

Keywords: expert opinion; peer review; thinking for yourself; science publishing; artificial intelligence review

Summary

This paper looks at a common problem: people accepting or rejecting ideas based on who said them, rather than whether they make sense. When someone cannot explain why a claim is true—they just know that an expert said it—their opinion does not really add anything useful. They are just repeating what they heard, not actually understanding it.

We examine how scientific peer review (the process where experts check each other's work before publication) tends to be cautious about new ideas. This caution has good reasons behind it, but it can also slow down the acceptance of correct discoveries. We look at several famous examples where important scientific findings were rejected for years or decades before being accepted.

Finally, we explore one possible improvement: using multiple AI systems to check scientific papers for errors and inconsistencies. This would not replace human judgment, but might help catch mistakes faster and make the review process more transparent.

1. The Problem: Letting Others Do Your Thinking

Science depends on people making careful judgments about what is true and what is not. But no single person can know everything. Doctors trust physicists about atoms. Physicists trust biologists about cells. This division of mental labor is normal and necessary.

The problem we are discussing is different. It happens when people stop thinking about whether an idea makes sense and instead only care about who said it. If a famous expert supports something, they believe it. If no expert has endorsed it, they dismiss it. They have replaced thinking with permission-seeking.

This paper calls this behavior authority-dependent judgment. It is not about normal trust in experts—that is often reasonable. It is about refusing to engage with ideas at all unless the "right" people have already approved them.

2. When Trusting Experts Makes Sense

Before criticizing over-reliance on authority, we should acknowledge when trusting experts is clearly the right choice. There are many situations where you simply cannot verify things yourself, and trusting specialists is the only sensible option.

Examples where expert trust clearly works better than individual judgment:

• Particle physics equipment: The machines that detect subatomic particles require hundreds of specialists to build and calibrate. No single person can check all their work.

• Medical research reviews: When doctors combine results from many studies, they can see patterns that no single study could show. Individual doctors must trust these combined analyses.

• Safety certification: When engineers approve a bridge or airplane as safe, that approval represents decades of accumulated knowledge that no individual designer could recreate from scratch.

Healthy trust in experts includes these features:

• You have some idea why the expert's position makes sense, even if you cannot check every detail

• You know what kind of evidence would change the expert's mind (and yours)

• You understand the limits of the expert's claim—what it does and does not cover

• Your trust is based on the expert's track record, not just their title or status

The critique in this paper targets trust that lacks these features—blind acceptance based purely on who is speaking.

3. How Scientific Ideas Spread (and Get Stuck)

Sociologists and historians of science study how knowledge spreads through research communities. What they find is not always flattering. In theory, scientists are supposed to follow the evidence wherever it leads. In practice, scientists are human beings with careers to protect, reputations to maintain, and limited time to evaluate new ideas.

The philosopher of science Thomas Kuhn pointed out that most scientific work happens within an accepted framework—a set of shared assumptions that everyone in the field takes for granted. Scientists who challenge that framework often face resistance, even when they have good evidence. The framework changes only when problems pile up so high that ignoring them becomes impossible.

Large replication projects have found that many published scientific findings cannot be reproduced when other scientists try to repeat the experiments. This suggests that the quality-control systems in science are not working as well as we might hope.

None of this means that people who disagree with experts are usually right—they are usually wrong. But it does mean that "the experts agree" is not the same thing as "this is true." Sometimes the experts are mistaken, and their agreement reflects social pressure rather than careful evaluation.

4. Different Names for the Same Problem

The habit of letting others do your thinking goes by several names. Each name emphasizes a slightly different aspect of the problem:

Deferential thinking: Automatically accepting what higher-status people say

Appeal to authority: Treating "an expert said so" as if it were a reason

Credential worship: Caring more about degrees and titles than about evidence

Gatekeeping: Controlling who is allowed to be taken seriously

Second-hand thinking: Following what approved thinkers say instead of checking the evidence yourself

Peer-review worship: Refusing to consider any idea that has not been formally published

5. Why Authority-Based Opinions Add Nothing

Here is the core problem: If you believe something only because an expert said it, your belief does not contain any additional information. You are just passing along what you heard. You have not verified anything or added any understanding.

Worse, if you cannot explain why the claim might be true, you also cannot tell whether the expert got it right this time. You do not know if the expert is talking about their area of specialty. You cannot spot conflicts of interest. You cannot tell if what the expert actually said matches what people claim they said.

Definition: A low-value opinion is a confident statement about whether something is true or false, made by someone who cannot explain: (1) the reasons behind the claim, (2) the limits of the claim, or (3) what evidence would prove the claim wrong.

This definition applies to confident pronouncements about correctness. It does not apply to reasonable, limited trust—like saying "my doctor recommended this medicine, and I trust her judgment based on her track record, though I know she could be wrong." That kind of limited trust acknowledges uncertainty and is perfectly reasonable.

The problem is when people make strong claims—"This is definitely true" or "That is clearly nonsense"—based purely on what authorities have said, without any ability to evaluate the underlying reasoning. Such opinions may help coordinate social agreement, but they do not actually track truth.

6. "It Has Not Been Peer-Reviewed"

A common modern version of authority-dependence is refusing to engage with any idea that has not passed formal peer review. Peer review is the process where scientific papers are checked by other experts before publication. It serves a real purpose: catching errors, improving clarity, and maintaining standards.

But peer review is a filter, not a truth detector. It catches some mistakes and misses others. Important discoveries have been rejected by peer review, and flawed papers have been approved. Treating peer review as a requirement for attention—rather than as one useful signal among several—is a mistake.

Three different attitudes toward peer review:

• Reasonable filtering: "I have limited time. I will prioritize peer-reviewed work, but I am open to looking at other evidence if it seems important." This is sensible.

• Reasonable preference: "Peer-reviewed work is more likely to be reliable, though not guaranteed. I understand why and what the limits are." This is also sensible.

• Blind refusal: "I will not even look at this because it has not been peer-reviewed. It is worthless by definition." This is an error.

One more distinction matters: There is a difference between "I have not evaluated this because I lack time" and "This is false because it lacks peer review." The first is a practical statement about resource limits. The second is a claim about truth that is not supported by the mere absence of peer review.

7. Why Peer Review Tends to Reject New Ideas

Peer review has a built-in bias toward caution, especially regarding genuinely new ideas. This is not because reviewers are malicious—there are understandable reasons for it:

Unequal risk: Reviewers face different consequences for different mistakes. If they approve a bad paper, they might be blamed when problems emerge. If they reject a good paper, usually no one notices. This makes rejection the safer choice.

Mental effort: Evaluating a truly new idea takes more work than evaluating a routine submission that fits existing frameworks. Reviewers are busy people who may not invest that extra effort.

Framework loyalty: Reviewers are selected from established experts in the field. They naturally tend to favor work that fits the approaches they already use and understand.

This caution has real benefits. Most unconventional ideas are wrong, and the filtering function prevents a lot of errors from spreading. But the same caution can delay recognition of genuine breakthroughs.

8. Famous Examples of Rejected Discoveries

Important note: The following examples are not typical. Most ideas that experts reject are correctly rejected. These are famous precisely because they are exceptions—cases where the experts were wrong and the outsider was right. They illustrate that institutional rejection can sometimes be mistaken, not that it usually is.

Handwashing to Prevent Infection (1840s)

Ignaz Semmelweis showed that when doctors washed their hands with chlorine solution, far fewer mothers died after childbirth. His fellow doctors rejected this finding—partly because it implied they had been killing their own patients, and partly because it contradicted the then-popular miasma theory, which held that disease spread through bad air. Semmelweis died in a mental institution in 1865. Germ theory and handwashing were not accepted until decades later.

Continental Drift (1912)

Alfred Wegener proposed that the continents had once been connected and had drifted apart over millions of years. He had good evidence—matching coastlines, similar fossils on different continents—but no explanation for how continents could move through solid rock. Geologists dismissed his idea. It was not accepted until the 1960s, when evidence of seafloor spreading provided the missing mechanism.

Bacteria Causing Ulcers (1980s)

Barry Marshall and Robin Warren proposed that stomach ulcers were caused by bacterial infection, not stress or spicy food. The medical establishment rejected this as implausible—everyone knew that bacteria could not survive in stomach acid. Marshall eventually drank a culture of the bacteria to prove his point, giving himself acute gastritis, which he then cleared with antibiotics. He and Warren received the Nobel Prize in Physiology or Medicine in 2005, about twenty years after their initial findings.

Quasicrystals (1982)

Dan Shechtman discovered a crystal structure that the established rules of crystallography said could not exist. The famous chemist Linus Pauling reportedly said "there are no quasicrystals, only quasi-scientists." Shechtman received the Nobel Prize in Chemistry in 2011.

Magnetohydrodynamics (1940s)

Hannes Alfvén developed the theory of how electrically conducting fluids behave in magnetic fields. This work was largely ignored by mainstream astrophysicists for nearly two decades. Alfvén received the Nobel Prize in Physics in 1970, and his theory is now fundamental to plasma physics and our understanding of space weather.

9. How Rejected Ideas Become "Obvious"

A common pattern repeats across these examples:

1. Someone proposes an idea that conflicts with current thinking

2. Institutional filters (peer review, expert opinion) reject or ignore the idea

3. Supporting evidence gradually accumulates

4. High-status scientists eventually accept the new evidence

5. The idea is rapidly treated as obviously correct—as if it had always been clear

After this cascade, people who follow expert opinion often act as if the conclusion was obvious all along. The years of resistance disappear from collective memory.

10. How to Tell If Someone Is Actually Thinking

The following questions help distinguish people who have actually evaluated an idea from people who are just relying on authority. These are not tests of intelligence or legitimacy—reasonable people may fail some questions in specific situations. They are indicators that, taken together, suggest whether a judgment reflects real understanding.

Question | Independent thinker | Authority follower
Can you explain WHY this claim is true? | Gives reasons | Names an authority
What would change your mind? | Describes evidence | Cannot say
What are the limits of this claim? | States boundaries | Cannot say
Would you change position if experts changed? | Not automatically | Yes
Will you evaluate ideas before peer review? | Yes, if warranted | No

11. Technology Changes How Knowledge Spreads

Throughout history, new technologies have changed who gets to share knowledge and how quickly ideas spread. The printing press broke the scribes' monopoly on copying texts. The internet broke the monopoly of traditional publishers.

Today, scientific preprint servers like arXiv allow researchers to share their work immediately, without waiting for peer review. Discussion happens in public, and mistakes can be caught quickly by many readers rather than slowly by a few reviewers. This does not replace quality control, but it changes when and how that control happens.

12. Could AI Help Check Scientific Papers?

Note: The following idea is exploratory. It is not a proven solution or a recommendation. It explores one possible direction that might be worth investigating.

The problems described above raise a practical question: Could we preserve quality control while reducing bias and delay? Recent advances in artificial intelligence suggest one possibility worth exploring.

The Current Situation

Traditional peer review relies on a small number of human reviewers. These reviewers are themselves part of the established framework, face unequal career risks for different types of errors, and make decisions through an opaque process.

What AI Could Check

Current AI systems can rapidly check certain things: Are the math calculations correct? Does the logic hold together? Do the cited sources actually say what the paper claims they say? Does the paper contradict itself? These are core quality-control functions that do not require human judgment about importance or theory preference.
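
To make this concrete, here is a minimal sketch of how those four checks might be framed as narrow, structured review tasks. Everything in it—the check categories, field names, and prompt wording—is an illustrative assumption for this paper, not an existing tool or API.

```python
# Illustrative sketch only: check categories and prompt wording are
# assumptions made for this paper, not an existing system.
from dataclasses import dataclass
from enum import Enum

class CheckType(Enum):
    MATH = "Are the calculations and derivations correct?"
    LOGIC = "Do the conclusions follow from the stated premises?"
    CITATIONS = "Do the cited sources say what the paper claims they say?"
    CONSISTENCY = "Does the paper contradict itself anywhere?"

@dataclass
class Finding:
    check: CheckType
    location: str     # e.g. "Section 3, paragraph 2" (hypothetical locator)
    description: str  # what the reviewer flagged
    severity: str     # "minor" or "major"

def build_prompt(paper_text: str, check: CheckType) -> str:
    """Frame one narrow quality-control question, deliberately excluding
    judgments of importance, novelty, or theoretical preference."""
    return (
        f"Review the paper below for exactly one kind of issue: {check.value}\n"
        "Do not judge whether the paper's theory is true or important.\n"
        "List each problem with its location and a short explanation.\n\n"
        + paper_text
    )
```

Keeping each question this narrow is the point: the prompt asks for defects that can be checked from the text itself, not for a verdict on the paper.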

Cross-Checking to Prevent AI Bias

To avoid replacing human gatekeeping with AI gatekeeping, any such system should use multiple AI reviewers from different sources. Each AI would generate a report. Then each AI would check the other AIs' reports. The final output would show:

• Issues all AIs agreed on

• Issues most AIs agreed on, with noted disagreements

• Issues where AIs disagreed without resolution

• Issues that were flagged but then withdrawn after reconsideration

Disagreement would be preserved, not hidden. The system would not declare papers "correct" or "incorrect"—only flag potential issues for human attention.
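
As one concrete illustration of that aggregation step, the sketch below groups flagged issues by how many reviewers still stand behind them after the cross-checking round. The reviewer names and issue strings are hypothetical, and matching differently worded flags to the same underlying issue is assumed away here; a real system would have to solve that matching problem.

```python
# Sketch of the consensus step described above. `reports` maps each
# reviewer model to the issues it still flags after reading the other
# models' reports; withdrawn flags have already been dropped.
from collections import defaultdict

def aggregate_reports(reports: dict[str, set[str]]) -> dict[str, list[str]]:
    votes: defaultdict[str, int] = defaultdict(int)
    for issues in reports.values():
        for issue in issues:
            votes[issue] += 1

    n = len(reports)
    grouped = {"unanimous": [], "majority": [], "contested": []}
    for issue, count in sorted(votes.items()):
        if count == n:
            grouped["unanimous"].append(issue)
        elif count > n / 2:
            grouped["majority"].append(issue)
        else:
            grouped["contested"].append(issue)  # disagreement preserved, not hidden
    return grouped

# Three hypothetical reviewer models:
reports = {
    "reviewer_a": {"Eq. 3 sign error", "Ref. 12 does not support the claim"},
    "reviewer_b": {"Eq. 3 sign error"},
    "reviewer_c": {"Eq. 3 sign error", "Ref. 12 does not support the claim"},
}
print(aggregate_reports(reports))
# {'unanimous': ['Eq. 3 sign error'],
#  'majority': ['Ref. 12 does not support the claim'], 'contested': []}
```

Note that the output only sorts flags by level of agreement; deciding what to do with each flag remains a human task.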

Limitations and Problems

This approach has real limitations that must be acknowledged:

• Shared blind spots: AIs trained on similar data may share similar biases, even if they come from different companies.

• Training bias: If certain frameworks are underrepresented in training data, all AIs may be biased against them.

• Question sensitivity: AI responses depend heavily on how questions are asked. Bad questions produce bad answers.

• False confidence: AIs sometimes express high confidence in wrong answers, and cross-checking may not always catch this.

• Citation errors: AIs may incorrectly confirm that sources exist or say what papers claim they say.

• Cannot determine truth: AI review can only check internal consistency and accuracy. It cannot determine whether a claim about the world is actually true—that requires experiments, replication, and time.

What AI Review Would and Would Not Do

AI Review CAN:

• Check math and logic

• Verify that cited sources exist and say what the paper claims


• Check for internal contradictions

• Flag scope problems (claims vs. evidence)

• Produce transparent, auditable output

AI Review CANNOT:

• Determine if a theory is true

• Judge importance of findings

• Choose between competing theories

• Replace peer review for publication

• Replace experiments and replication

Addressing AI Bias

AI systems will inevitably contain biases from their training. However, well-designed review prompts can help distinguish between "this contradicts established theory" (which might be either a problem or a genuine discovery) and "this contains a logical error" (which is always a problem regardless of theory). Furthermore, users can always submit papers to different AI systems or specify different interpretive frameworks. This makes biases visible and adjustable rather than hidden.
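
One way to build that distinction directly into a review prompt is sketched below. The wording is a hypothetical example of the separation described above, not a tested or recommended prompt.

```python
# Hypothetical prompt fragment separating the two kinds of flags; the
# exact wording is an illustrative assumption.
REVIEW_INSTRUCTIONS = """\
Report issues under exactly two headings:

1. LOGICAL OR MATHEMATICAL ERRORS: internal contradictions, invalid
   inference steps, or miscalculations. These are defects no matter
   which theoretical framework the paper assumes.

2. DEPARTURES FROM ESTABLISHED THEORY: claims that conflict with
   mainstream models. Report these as departures only. Do not count a
   departure, by itself, as an error.
"""
```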

13. Conclusion

Authority-dependent judgment replaces understanding with hierarchy. Instead of asking "Does this make sense?" it asks "Who said it?" This is a natural human tendency, and in many situations trusting experts is perfectly reasonable. But when people make confident claims about truth or falsity without any ability to explain the underlying reasoning, their opinions add nothing beyond the authority's original statement.

Peer review is a valuable institution, but it is a filter with known limitations, not a truth machine. It tends toward caution, especially regarding genuinely new ideas. The historical examples in this paper—handwashing, continental drift, bacterial ulcers, quasicrystals, magnetohydrodynamics—illustrate that this caution can delay correct discoveries by years or decades. These are exceptional cases, not typical ones, but they demonstrate that the mechanisms exist.

Cross-validated AI review is explored here as one possible future direction. It would not solve all problems and has significant limitations of its own. But if implemented carefully, it might help catch errors faster, make the review process more transparent, and return more power to people who can actually evaluate reasons, scope, and evidence for themselves.

Whether such systems should actually be adopted is a separate question that requires careful study and broad discussion.

References

Alfvén, H. (1942). Existence of electromagnetic-hydrodynamic waves. Nature, 150(3805), 405-406.

Carter, K. C. (1983). The Etiology, Concept, and Prophylaxis of Childbed Fever. University of Wisconsin Press.

Goldman, A. I. (1999). Knowledge in a Social World. Oxford University Press.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

Marshall, B. J. (2005). Helicobacter connections. Nobel Lecture, December 8, 2005.

Merton, R. K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Oreskes, N. (1999). The Rejection of Continental Drift: Theory and Method in American Earth Science. Oxford University Press.

Popper, K. (1959). The Logic of Scientific Discovery. Hutchinson.

Shechtman, D., Blech, I., Gratias, D., & Cahn, J. W. (1984). Metallic phase with long-range orientational order and no translational symmetry. Physical Review Letters, 53(20), 1951.