Ultracrepidarianism in the Age of AI: When Ignorance Learns to Sound Intelligent

There is a Latin-derived word for the habit of giving opinions beyond your competence: ultracrepidarianism. It comes from the ancient story of a shoemaker who, after correctly pointing out a flaw in a painted sandal, went on to critique the rest of the painting. Think of the dinner-party guest who corrects the surgeon about a medical procedure, or the co-worker who has read one article and is now a marketing expert.

“The confident sharing of information without real knowledge is an old human flaw. AI has simply institutionalized it.”

The consequences are not abstract. Patients act on medical misinformation delivered with clinical confidence, googled by a neighbor. Investors make decisions based on AI-generated financial analysis that no qualified human has reviewed. Public policies are shaped by advisors who consult a chatbot instead of a domain expert. Engineers ship systems built on code they do not understand, generated by a tool they trust unconditionally. In each case, the problem is not that someone lacked access to information. It is that they could not distinguish information from understanding, and nothing in their environment encouraged them to try. Confidence without competence has always caused harm. At scale, it causes systemic harm.

The Cobbler Has Always Had Opinions

The pattern was familiar long before AI entered the picture. A footballer wins the Champions League and becomes a commentator on tax policy, vaccine science, and geopolitical conflict. A reality television star accumulates Twitter followers and pivots to nutritional advice. A celebrity posts about investment strategies between trending TikTok videos. None of this is new. Confidence has always been mistaken for authority, and society has always rewarded confidence over accuracy.

What is new is the speed and the scale.

Donald Trump has suggested that disinfectant could be used to cure COVID-19, claimed that noise from wind turbines causes cancer, and asserted that tariffs are paid directly by foreign countries rather than by domestic residents. Each statement was delivered with complete certainty. Each was factually wrong. Each reached tens of millions of people before any correction was issued, if one was issued at all. This is ultracrepidarianism with a microphone, a press corps, and a social media account with 90 million followers.

The problem is not stupidity. It is the structural availability of internet information that will confirm almost any claim, however ludicrous, with confidence. When there is no cost for being wrong, and the audience rewards certainty regardless of accuracy, the incentive is always to assert, never to qualify.

AI tools have now given that same power to everyone.

Speaking Without Knowledge Is Nothing New

Overconfidence is not new. People have always spoken beyond their knowledge. What kept them in check was friction: forming and expressing an opinion took effort, and visibly wrong opinions carried social cost. That friction is gone.

Generative AI removes the gap between having a thought and sounding authoritative about it.

You no longer need to read the book, build the model, or spend years in a field. You need a prompt and thirty seconds. The output will be fluent, structured, and confident. It will cite things. It will use the right terminology, and it will make you sound like an expert. But it is not expertise, because you cannot foresee the nuances and consequences of the claims you are repeating. You act, and you only notice the difference once the damage is done.

Fluency Is Not Knowledge

The core confusion is now produced at scale: fluency in a topic and knowledge of a topic feel the same from the outside, but they are completely different things.

A language model does not understand what it generates. It produces statistically plausible sequences of tokens based on patterns in training data. When it is right, it is not right because it reasoned correctly; it is right because correctness was the most probable output given the input. When it is wrong, it is wrong in the same fluent, confident way as when it is right. And on top of this, the commercial nature of these models pushes them to conform to the user’s intentions and framing. “That is an excellent question, Dave. And you are right.”
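To make that concrete, here is a deliberately tiny sketch of next-token generation. The three-word contexts, the vocabulary, and the probabilities are invented for illustration; a real model learns weights over tens of thousands of tokens. What the simplification preserves is the essential point: nothing in the loop checks whether a continuation is true, only how frequent it was.

```python
import random

# Toy frequency table: which token followed which three-token context in
# the "training data". These entries are invented for illustration.
next_token_probs = {
    ("tariffs", "are", "paid"): {"by": 1.0},
    ("are", "paid", "by"): {"importers": 0.5, "foreigners": 0.5},  # equally fluent; one is wrong
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-3:]))
        if not dist:
            break
        # Pick the continuation by frequency, not by truth.
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens)

# Prints "tariffs are paid by importers" or "tariffs are paid by foreigners",
# with identical fluency and confidence either way.
print(generate(("tariffs", "are", "paid"), steps=2))
```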

Users who rely on this output are reassured by its confidence. They forward the answer. They repeat it in meetings. They publish it. The chain from model output to public statement can be extremely short, and the human in the middle is often not adding verification; they are adding distribution.

AI lowers the cost of sounding informed. It does not lower the cost of being wrong.

Synthetic Authority at Scale

The individual ultracrepidarian was always annoying and occasionally harmful. The systemic, mass AI-generated version is something different.

When an entire culture has access to instant, fluent, authoritative-sounding answers, several things happen simultaneously.

  • Domain specialists find their expertise questioned by people armed with chatbot output.
  • Public discourse fills with confident assertions that are difficult to falsify without significant effort.
  • Workplaces reward speed and surface polish over rigor.
  • People who express doubt, who say “I’m not sure”, “this is complicated”, or “you should ask someone with more experience”, start to look slow, uncommitted, or weak.

Intellectual humility becomes a competitive disadvantage. Worse, domain experts begin to withhold their critiques and opinions, because pushing back against the flood of fluent output is overwhelming. Peer reviewers, for example, have reported spending their time verifying references in suspicious survey submissions, time that could have gone to evaluating substantive contributions.

This plays out visibly in journalism, consulting, legal drafting, medical queries, and political communication. The common thread is not that the AI made an error. It is that the human in the loop lacked the domain knowledge to catch it and did not assess the result critically enough to notice.

What Expertise Actually Is

Expertise is not access to answers. It is the capacity to evaluate answers, to know which questions to ask in order to validate them, and to distrust a plausible outcome because you have seen plausible outcomes fail before.

That capacity takes time to build. It requires being wrong in consequential situations and learning from it. It requires sustained exposure to a domain’s genuine complexity, not its Wikipedia summary or its ChatGPT synthesis.

AI cannot replicate this. It can only simulate its outputs.

The danger is that a generation of users, in organizations, in governments, in public life, will treat the synthetic answer as the real thing. Not because they are stupid, but because the quick, unvalidated AI answer is good enough to give their boss or client a good-looking conclusion. They get praised for delivering fast results and rarely face the consequences of being wrong.

The Practical Consequence

When everyone can sound like an expert, the social function of expertise collapses. Trust shifts from academic credentials and track records to whoever produces the most persuasive output fastest. Misinformation does not need to be true. It needs to be fluent and shareable before the correction arrives.

Look at the reported US job numbers. In January 2026, the Bureau of Labor Statistics reported growth of 584,000 jobs for 2025, which Trump confidently called “the biggest growth in years”. In February the figure was revised down to 181,000, the lowest growth in five years.

Humans have always rewarded overconfidence; the impulse to speak beyond one’s knowledge has never been suppressed. AI has industrialized that impulse: it has lowered the cost, polished the output, and removed the validation friction that once slowed its spread.

The ethical challenge of our time is not building smarter AI. It is preserving epistemic humility in the people who use it.

What You Can Actually Do About It

This is not a problem that fixes itself. It requires deliberate choices: in how you use AI, how you evaluate information, and how you implement governance. Ten practical measures are listed below:

  1. Distinguish the source from the output. Fluent language is not evidence of understanding.
  2. Before acting on AI-generated content, ask who or what produced it, on what basis, and whether a qualified human has reviewed it. This sounds obvious. Most people skip it.
  3. Reintroduce friction deliberately. The removal of friction is exactly what makes AI dangerous in this context. Build verification steps back in.
  4. Require sourcing for consequential claims. Make “I don’t know” a legitimate and respected answer in your organization.
  5. Examine the producers of the output. Make sure they know the limits of their knowledge, understand the consequences, and ask critical questions.
  6. Protect your domain specialists. If your organization has people with genuine deep expertise, do not let them be overruled by someone with a chatbot and a deadline. Create structures that give domain knowledge actual weight in decisions.
  7. Train for calibration, not just critical thinking. Calibration is the ability to accurately assess what you know, what you don’t know, and how confident you should be; it is a different skill from critical thinking. Calibration is trainable and undervalued; a minimal sketch of how to measure it follows this list.
  8. Read primary sources. AI summaries compress and smooth. They remove the uncertainty, the caveats, the dissenting footnotes. When something matters, go to the original: the study, the report, the primary document. The nuance that gets lost in summarization is often exactly the nuance that changes the conclusion.
  9. Be publicly uncertain when you are uncertain. In professional and public contexts, model epistemic humility visibly. Say what you don’t know. Qualify your claims.
  10. Refer people to better sources when they exist. Overconfidence is contagious; so is intellectual honesty.
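On measuring calibration (point 7): the Brier score is a standard, simple metric. Record a probability for every consequential claim you make, record whether it turned out true, and average the squared errors. The track record below is invented to show the effect; the mechanism is the point: an overconfident forecaster scores worse than an honest one, even when both are right equally often.

```python
# Brier score: mean squared error between stated confidence and outcome.
# 0.0 is perfect; always answering "50/50" scores 0.25.
def brier_score(forecasts, outcomes):
    """forecasts: stated probabilities in [0, 1]; outcomes: 1 if true, else 0."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: ten claims, six of which turned out true.
outcomes = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

# Saying "90% sure" about everything scores worse than honestly saying
# "60%", even though both forecasters were right exactly six times.
print(f"overconfident: {brier_score([0.9] * 10, outcomes):.2f}")  # 0.33
print(f"calibrated:    {brier_score([0.6] * 10, outcomes):.2f}")  # 0.24
```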

The goal is not to distrust AI. It is to use it without losing the judgment that makes its output useful rather than dangerous.

With AI, the shoemaker is still criticizing the painting, but now he has a very good ghostwriter. The question is whether anyone in the room has the critical thinking skills to question his conclusions and validate his sources.



