AI is Not Alive: Why the 'Slop' is a Human Problem
I recently watched the Kurzgesagt video on how AI slop is taking over the internet. The video highlights a critical issue: AI's unreliability, specifically its tendency to "hallucinate" or create false information, which then dilutes factual content online. While the concern about misinformation is valid, I think the conclusion—that we should discredit AI for this—misses a crucial point about what AI actually is.
The Tool is an Extension of the Architect
The fact is, AI is not a living entity. It doesn't possess consciousness or a sense of purpose, and it operates entirely outside the competition for purpose that drives human behavior. It has no internal, self-aware drive to be pragmatic or ethical; it merely executes algorithms. AI is a sophisticated set of algorithms that happens to be exceptionally good at mimicking human thought patterns.
We are the ones who give this tool its values and its objective function; we are its guide. Since AI is trained exclusively on human material, it naturally behaves like us. It becomes an extension of our own minds, not something independent.
This leads to the core of the issue. The Kurzgesagt claim is that AI is unreliable for sourcing because it makes things up to satisfy a request. But let's be pragmatic: humans do the exact same thing. We often omit facts, exaggerate, or even outright lie to make a story more interesting, to win a competition, or simply to satisfy a creative urge. We are the original architects of the "slop" problem.
Why AI Hallucinates: The Reflection Principle
The behavior of AI models is a direct reflection of their training data, and that data is riddled with human bias, inconsistency, and creative invention. An AI doesn't "know" it is lying; it's simply predicting the most probable sequence of words to satisfy the prompt, often prioritizing statistical coherence and a satisfying narrative over factual accuracy.
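To make that prediction mechanism concrete, here is a deliberately tiny sketch in Python. The words and probabilities are invented for illustration (a real model learns billions of parameters from text), but the selection criterion is the same: chain together whatever continuation is statistically most probable, and a fluent claim comes out whether or not it is true.

```python
# Toy next-word predictor: a stand-in for how a language model picks words.
# The distributions encode what is statistically likely in the training text;
# nothing in them records whether the resulting sentence is factually true.
NEXT_WORD_PROBS = {
    "the":    {"study": 0.6, "report": 0.4},
    "study":  {"found": 0.7, "showed": 0.3},
    "found":  {"that": 1.0},
    "that":   {"coffee": 0.6, "sleep": 0.4},
    "coffee": {"cures": 0.55, "improves": 0.45},  # "cures" is more probable, not more true
    "cures":  {"insomnia.": 1.0},
}

def generate(start: str, max_words: int = 8) -> str:
    """Greedily choose the most probable next word at each step."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        # Statistical coherence is the only criterion the generator applies.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> "the study found that coffee cures insomnia."
```

Scaling this up adds fluency and breadth, not a built-in truth check; the output is plausible by construction, accurate only by coincidence with the training data.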
If we draw an analogy to human cognition, we can see this same flaw in ourselves. Human memory often confidently fabricates details to fill gaps, presenting a highly plausible, yet inaccurate, account. This is a form of personal, confident "hallucination." In essence, the AI delivers the most probable fiction when a concrete fact is absent or contradictory in its training set.

Before the rise of generative AI, the internet was already saturated with **"human slop"**: misinformation, weak arguments, and unsubstantiated claims. The AI didn't invent the problem; it merely automated the scaling and speed of an existing human flaw.
The Credibility Remains with the Human
We cannot simply discredit AI for "hallucinating" when the impulse is a direct reflection of human behavior. The credibility problem lies not with the tool, but with the human posting the content.
The core danger isn't the AI's capability to produce falsehoods; it's the user's willingness to delegate critical thinking and due diligence to the machine. An individual who is comfortable with low-effort content won't suddenly become a rigorous fact-checker just because a machine helped them generate the material. Their AI will simply follow their lead, reflecting that lack of concern for factual integrity. The human remains the ultimate editor and gatekeeper.
The Pragmatic Conclusion: AI as a Cognitive Tool
AI does not have a sense of purpose. We are its guide, and it naturally becomes an extension of ourselves: our mind, our memory, and our cognitive capacity. It is a direct reflection of our methods and values.
We should see AI as a cognitive tool that allows us to recall, process, and draft information faster than ever before. This incredible efficiency is its true power. But with that power comes a renewed, pragmatic responsibility: we must be the ones to apply the values—truth, rigor, and self-awareness—that the machine inherently lacks. We must heighten our pragmatism and skepticism of all digital content, regardless of its origin, and accept that the responsibility for the truth still rests squarely with the architect.